modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-03 00:49:08) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 549 classes) | tags (list, 1 – 4.05k entries) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-03 00:44:12) | card (string, 11 – 1.01M chars)
---|---|---|---|---|---|---|---|---|---
Tonystorm23/bart-cnn-samsum-finetuned | Tonystorm23 | 2025-03-05T21:14:11Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "dataset:samsum", "base_model:facebook/bart-large-cnn", "base_model:finetune:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2025-03-05T21:12:53Z |
---
library_name: transformers
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: bart-cnn-samsum-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-samsum-finetuned
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the samsum dataset.
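Since the card's usage sections are still placeholders, here is a minimal sketch of how a BART summarization fine-tune like this one is typically loaded with the `transformers` pipeline API; the checkpoint id comes from this repository, while the dialogue text and generation lengths are illustrative:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from this repository
summarizer = pipeline("summarization", model="Tonystorm23/bart-cnn-samsum-finetuned")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Great, see you there!"
)

# max_length/min_length are illustrative, not values from the card
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```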
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
mkhalifa/qwen-1b-longthought | mkhalifa | 2025-03-05T21:11:25Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-03-05T21:07:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
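The card leaves this section empty. Based on the repository tags (`transformers`, `qwen2`, `text-generation`, `conversational`), a minimal sketch would look like the following; the checkpoint id comes from this repository, and everything else is standard `transformers` chat usage, not code from the authors:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mkhalifa/qwen-1b-longthought"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The "conversational" tag suggests a chat template is defined
messages = [{"role": "user", "content": "Explain gradient descent in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```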
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MrRobotoAI/C4-R | MrRobotoAI | 2025-03-05T21:09:40Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:MrRobotoAI/A3", "base_model:merge:MrRobotoAI/A3", "base_model:MrRobotoAI/B2-R", "base_model:merge:MrRobotoAI/B2-R", "base_model:MrRobotoAI/C3", "base_model:merge:MrRobotoAI/C3", "base_model:MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K", "base_model:merge:MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-03-05T15:56:35Z |
---
base_model:
- MrRobotoAI/A3
- MrRobotoAI/C3
- MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K
- MrRobotoAI/B2-R
library_name: transformers
tags:
- mergekit
- merge
---
# merge 11,168 REPEAT
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K](https://huggingface.co/MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/A3](https://huggingface.co/MrRobotoAI/A3)
* [MrRobotoAI/C3](https://huggingface.co/MrRobotoAI/C3)
* [MrRobotoAI/B2-R](https://huggingface.co/MrRobotoAI/B2-R)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/A3
- model: MrRobotoAI/B2-R
- model: MrRobotoAI/C3
merge_method: model_stock
base_model: MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K
normalize: true
dtype: float16
```
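For reference, a configuration like the one above is typically applied with mergekit's command-line entry point; the config filename, output path, and `--cuda` flag below are illustrative, not taken from this card:
```bash
pip install mergekit
# Save the YAML above as config.yml, then run:
mergekit-yaml config.yml ./merged-model --cuda
```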
|
ben832/mfluxhint | ben832 | 2025-03-05T21:09:20Z | 0 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:mit", "region:us"] | text-to-image | 2025-03-05T03:05:29Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: interior design
output:
url: images/download.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: mit
---
# hint
<Gallery />
## Download model
[Download](/ben832/hint/tree/main) the weights from the Files & versions tab.
|
tiiuae/Falcon3-7B-Instruct-GGUF | tiiuae | 2025-03-05T21:09:03Z | 1,072 | 12 | transformers | ["transformers", "gguf", "falcon3", "text-generation", "en", "fr", "es", "pt", "base_model:tiiuae/Falcon3-7B-Instruct", "base_model:quantized:tiiuae/Falcon3-7B-Instruct", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2024-12-14T09:42:13Z |
---
language:
- en
- fr
- es
- pt
base_model:
- tiiuae/Falcon3-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- falcon3
---
<div align="center">
<img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/general/falco3-logo.png" alt="drawing" width="500"/>
</div>
# Falcon3-7B-Instruct-GGUF
The **Falcon3** family of open foundation models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
**Falcon3-7B-Instruct** achieves state-of-the-art results (at the time of release) on reasoning, language understanding, instruction following, code and mathematics tasks.
Falcon3-7B-Instruct supports 4 languages (English, French, Spanish, Portuguese) and a context length of up to 32K.
This repository contains the GGUF quantizations of the instruction-tuned 7B Falcon3 model.
## Model Details
- Architecture
  - Transformer-based causal decoder-only architecture
  - 28 decoder blocks
  - Grouped Query Attention (GQA) for faster inference: 12 query heads and 4 key-value heads
  - Wider head dimension: 256
  - High RoPE value to support long context understanding: 1000042
  - Uses SwiGLU and RMSNorm
  - 32K context length
  - 131K vocab size
- Pretrained on 14 teratokens of data comprising web, code, STEM, high-quality and multilingual sources, using 1,024 H100 GPU chips
- Post-trained on 1.2 million samples of STEM, conversational, code, safety and function-call data
- Supports EN, FR, ES, PT
- Developed by [Technology Innovation Institute](https://www.tii.ae)
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024
- Quantization: q2_K, q3_K_M, q4_0, q4_K_M, q5_0, q5_K_M, q6_K, q8_0
## Getting started
### 1. Download GGUF models from Hugging Face
First, download the model from Hugging Face. You can use the `huggingface_hub` library or download it manually:
```bash
pip install huggingface_hub
huggingface-cli download {model_name}
```
This will download the model to your local Hugging Face cache (pass `--local-dir .` to place the files in the current directory). Make sure to replace `{model_name}` with the repository id, in this case `tiiuae/Falcon3-7B-Instruct-GGUF`.
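Alternatively, a single quantization file can be fetched from Python with `huggingface_hub`; note that the exact filename below is an assumption following the usual `<model>-<quant>.gguf` convention and should be checked against the repository's Files tab:
```python
from huggingface_hub import hf_hub_download

# Fetch one quant (e.g. q4_K_M) instead of the whole repository;
# verify the exact filename in the repo's "Files and versions" tab.
path = hf_hub_download(
    repo_id="tiiuae/Falcon3-7B-Instruct-GGUF",
    filename="Falcon3-7B-Instruct-q4_k_m.gguf",  # assumed name
)
print(path)
```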
### 2. Install llama.cpp
You have several options for installing llama.cpp:
**1. Build from source:**
This gives you the most flexibility and control. Follow the instructions in the llama.cpp repository to build from source:
```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
```
For more information on building llama.cpp from source, please refer to the llama.cpp documentation: **[llama.cpp build from source](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md)**.
**2. Download pre-built binaries:**
If you prefer a quicker setup, you can download pre-built binaries for your operating system. Check the llama.cpp repository for available binaries.
**3. Use Docker:**
For a more contained environment, you can use the official llama.cpp Docker image. Refer to the llama.cpp documentation for instructions on how to use the Docker image.
For detailed instructions and more information, please check the llama.cpp documentation on Docker: **[llama.cpp docker](https://github.com/ggerganov/llama.cpp/blob/master/docs/docker.md)**.
### 3. Start playing with your model
Run simple text completion
```bash
llama-cli -m {path-to-gguf-model} -p "I believe the meaning of life is" -n 128
```
Run in conversation mode
```bash
llama-cli -m {path-to-gguf-model} -p "You are a helpful assistant" -cnv -co
```
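You can also serve the model over an OpenAI-compatible HTTP API with `llama-server`; the port and the request body below are illustrative:
```bash
llama-server -m {path-to-gguf-model} --port 8080
# In another shell, query the OpenAI-compatible endpoint:
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" \
    -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```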
## Useful links
- View our [release blogpost](https://huggingface.co/blog/falcon3).
- Feel free to join [our discord server](https://discord.gg/fwXpMyGc) if you have any questions or to interact with our researchers and developers.
## Technical Report
Coming soon.
## Citation
If the Falcon3 family of models was helpful to your work, feel free to cite us.
```
@misc{Falcon3,
title = {The Falcon 3 Family of Open Models},
url = {https://huggingface.co/blog/falcon3},
author = {Falcon-LLM Team},
month = {December},
year = {2024}
}
```
|
myst72/Llama3-8B_MIFT-En_opencoder-edu_PIFT-EnJa_1000 | myst72 | 2025-03-05T21:08:58Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-03-05T21:04:58Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sarahpann/safety_model | sarahpann | 2025-03-05T21:08:55Z | 0 | 0 | transformers | ["transformers", "safetensors", "modernbert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2025-03-05T21:08:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
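The card leaves this section empty. Given the repository tags (`modernbert`, `fill-mask`), a minimal sketch would look like the following; the checkpoint id comes from this repository, and the example sentence is an assumption:
```python
from transformers import pipeline

# The repository is tagged "fill-mask" with a ModernBERT architecture
unmasker = pipeline("fill-mask", model="sarahpann/safety_model")

# ModernBERT-style tokenizers use the [MASK] token
for prediction in unmasker("This request should be [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```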
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
irishprancer/41e80fd2-7ce8-4e12-b7cb-0873ff693b42 | irishprancer | 2025-03-05T21:02:51Z | 0 | 0 | transformers | ["transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-03-05T18:27:20Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Hashintha/test | Hashintha | 2025-03-05T21:02:37Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-03-05T20:35:12Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MODEL
---
# Test
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MODEL` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Hashintha/test', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Lod34/Animator2D-v2 | Lod34 | 2025-03-05T21:02:14Z | 0 | 0 | transformers | ["transformers", "pytorch", "sprite_generator", "text-to-image", "en", "dataset:pawkanarek/spraix_1024", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:mit", "endpoints_compatible", "region:us"] | text-to-image | 2025-03-02T21:41:31Z |
---
license: mit
datasets:
- pawkanarek/spraix_1024
language:
- en
base_model:
- google-t5/t5-base
metrics:
- mse
library_name: transformers
pipeline_tag: text-to-image
---
# 🎨 Animator2D
Animator2D is an AI-powered model designed to generate pixel-art sprite animations from textual descriptions. This model leverages a BERT-based text encoder to extract textual features and a convolutional generative network to create animated sprites. The goal is to provide game developers and artists with a tool that can bring character concepts to life with minimal effort.
## 🛠️ Model Overview
- **Name:** Animator2D
- **Input:**
- Character description
- Number of animation frames
- Character action
- Viewing direction
- **Output:** Animated sprite sheet in image format
## 📦 Dataset
The model was trained using the [spraix_1024](https://huggingface.co/datasets/pawkanarek/spraix_1024) dataset, which contains animated sprites with detailed textual descriptions. This dataset serves as a foundation for training the model to generate high-quality, relevant sprites based on textual inputs.
## 🚀 Model Versions
Over time, several iterations of Animator2D have been developed, each improving on the previous version with different training strategies and hyperparameters. Below is a chronological overview of the versions created so far:
| Model Version | Description |
|----------------------|-------------|
| **Animator2D-v1** | The first full version developed in this project, utilizing a structured training approach with BERT for text encoding and a convolutional generator for sprite creation. |
| **Animator2D-mini-10e** | A simplified version trained with only 10 epochs, batch size of 8, learning rate of 1e-4, and image size of 64x64. |
| **Animator2D-mini-100e** | An extension of the mini-10e version, trained for 100 epochs for improved performance. |
| **Animator2D-mini-250e** | A more refined version with 250 epochs, batch size increased to 16, learning rate of 2e-4, and image resolution of 128x128. |
| **Animator2D-v2 (In Development)** | A new version being built from scratch with an entirely redesigned training process, aiming for better animation quality and efficiency. |
## 🔮 Future Goals
This is just the first iteration of Animator2D. Future updates will focus on refining and expanding its capabilities:
- **Multiple Output Formats**: Currently, the model generates a single sprite sheet. Future updates will enable exporting animations in various formats, including folders with individual frames, GIFs, and videos.
- **Frame Input Optimization**: The number of frames is currently manually defined. Improvements will include a more intuitive system that considers FPS and actual animation duration.
- **Model Refinement**: The current model is in an early stage. Future improvements will enhance sprite generation consistency and quality by optimizing the architecture and training dataset.
- **Sprite Size Customization**: A new input will allow users to specify the character height in pixels, dynamically adjusting the sprite’s artistic style. This will ensure greater flexibility, allowing for different art styles (e.g., Pokémon vs. Metal Slug aesthetics).
---
Animator2D is an exciting step toward AI-assisted sprite animation generation, and future versions will continue to push the boundaries of what’s possible in pixel-art automation! 🚀🎮
|
clairecat/DeepSeek-R1-Grading-0305 | clairecat | 2025-03-05T21:01:23Z | 0 | 0 | transformers | ["transformers", "pytorch", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "sft", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-03-05T20:41:19Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** clairecat
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/lumikabra-123B_v0.1-i1-GGUF | mradermacher | 2025-03-05T21:01:18Z | 185 | 1 | transformers | ["transformers", "gguf", "mergekit", "lumikabra-123B", "en", "base_model:schnapper79/lumikabra-123B_v0.1", "base_model:quantized:schnapper79/lumikabra-123B_v0.1", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2025-02-03T08:27:52Z |
---
base_model: schnapper79/lumikabra-123B_v0.1
language:
- en
library_name: transformers
license: other
license_link: https://mistral.ai/licenses/MRL-0.1.md
license_name: mistral-ai-research-licence
quantized_by: mradermacher
tags:
- mergekit
- lumikabra-123B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/schnapper79/lumikabra-123B_v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/lumikabra-123B_v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
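For the split files in the table below, concatenation is a plain byte-level join; for example, using the IQ3_XS part names from the table:
```bash
# Join a split quant into a single GGUF file before loading it
cat lumikabra-123B_v0.1.i1-IQ3_XS.gguf.part1of2 \
    lumikabra-123B_v0.1.i1-IQ3_XS.gguf.part2of2 \
    > lumikabra-123B_v0.1.i1-IQ3_XS.gguf
```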
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 26.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 28.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 32.5 | |
| [GGUF](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 38.5 | |
| [GGUF](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 41.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 41.7 | |
| [GGUF](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q2_K.gguf) | i1-Q2_K | 45.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 47.1 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 50.2 | |
| [PART 1](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 52.9 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 53.1 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 55.4 | |
| [PART 1](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 59.2 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 64.7 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 65.5 | |
| [PART 1](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 69.4 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 69.7 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 73.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q4_1.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q4_1.gguf.part2of2) | i1-Q4_1 | 76.8 | |
| [PART 1](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 84.5 | |
| [PART 1](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 86.6 | |
| [PART 1](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/lumikabra-123B_v0.1-i1-GGUF/resolve/main/lumikabra-123B_v0.1.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 100.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
texanrangee/015b40fa-d9fd-4168-a482-463595be0be7 | texanrangee | 2025-03-05T21:00:22Z | 0 | 0 | transformers | ["transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-03-05T18:56:38Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aalok013/flux_schnell | aalok013 | 2025-03-05T20:59:38Z | 0 | 0 | null | ["gguf", "license:apache-2.0", "region:us"] | null | 2025-03-05T20:28:18Z |
---
license: apache-2.0
---
|
procit006/training_tts_nl_v7 | procit006 | 2025-03-05T20:58:43Z | 0 | 0 | transformers | ["transformers", "safetensors", "vits", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | text-to-audio | 2025-03-05T20:57:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
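The card leaves this section empty. Based on the repository tags (`vits`, `text-to-audio`), a minimal sketch following the standard `transformers` VITS usage would look like this; the Dutch example text is an assumption based on the "nl" in the repository name:
```python
import torch
from transformers import VitsModel, AutoTokenizer

# Tags indicate a VITS text-to-audio checkpoint
model = VitsModel.from_pretrained("procit006/training_tts_nl_v7")
tokenizer = AutoTokenizer.from_pretrained("procit006/training_tts_nl_v7")

inputs = tokenizer("Hallo, dit is een test.", return_tensors="pt")
with torch.no_grad():
    waveform = model(**inputs).waveform  # shape: (batch, samples)

# The audio can be saved or played back at model.config.sampling_rate
```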
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Hasso5703/QwQ-32B-Q4_0-GGUF | Hasso5703 | 2025-03-05T20:55:35Z | 0 | 0 | null | ["gguf", "chat", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:Qwen/QwQ-32B", "base_model:quantized:Qwen/QwQ-32B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2025-03-05T20:54:05Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/QwQ-32B
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Hasso5703/QwQ-32B-Q4_0-GGUF
This model was converted to GGUF format from [`Qwen/QwQ-32B`](https://huggingface.co/Qwen/QwQ-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/QwQ-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Hasso5703/QwQ-32B-Q4_0-GGUF --hf-file qwq-32b-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Hasso5703/QwQ-32B-Q4_0-GGUF --hf-file qwq-32b-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Hasso5703/QwQ-32B-Q4_0-GGUF --hf-file qwq-32b-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Hasso5703/QwQ-32B-Q4_0-GGUF --hf-file qwq-32b-q4_0.gguf -c 2048
```
|
Nexesenex/pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL
|
Nexesenex
| 2025-03-05T20:55:19Z | 5 | 0 | null |
[
"safetensors",
"llama",
"base_model:pankajmathur/orca_mini_v9_6_1B-Instruct",
"base_model:finetune:pankajmathur/orca_mini_v9_6_1B-Instruct",
"license:llama3.2",
"region:us"
] | null | 2025-03-01T22:49:50Z |
---
license: llama3.2
base_model:
- pankajmathur/orca_mini_v9_6_1B-Instruct
---
# about
pankajmathur/orca_mini_v9_6_1B-Instruct, abliterated with https://github.com/Orion-zhen/abliteration using Undi95's LPL (layer-per-layer) technique.
For this model, that approach alters its capabilities less than a single-pass abliteration over the whole model.
|
mradermacher/Mistral-EuformiaV5-GGUF
|
mradermacher
| 2025-03-05T20:54:05Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ChrisMoreton/Mistral-EuformiaV5",
"base_model:quantized:ChrisMoreton/Mistral-EuformiaV5",
"endpoints_compatible",
"region:us"
] | null | 2025-03-05T20:25:38Z |
---
base_model: ChrisMoreton/Mistral-EuformiaV5
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ChrisMoreton/Mistral-EuformiaV5
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
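As a quick sanity check, single-file quants like the ones below can be run directly with llama.cpp; a minimal sketch (a recent llama.cpp build is assumed, and the file name matches the Q4_K_M entry in the table below):
```bash
# Download and run the Q4_K_M quant straight from this repo
llama-cli --hf-repo mradermacher/Mistral-EuformiaV5-GGUF \
  --hf-file Mistral-EuformiaV5.Q4_K_M.gguf \
  -p "Hello, world"
```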
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-EuformiaV5-GGUF/resolve/main/Mistral-EuformiaV5.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-EuformiaV5-GGUF/resolve/main/Mistral-EuformiaV5.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-EuformiaV5-GGUF/resolve/main/Mistral-EuformiaV5.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-EuformiaV5-GGUF/resolve/main/Mistral-EuformiaV5.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-EuformiaV5-GGUF/resolve/main/Mistral-EuformiaV5.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-EuformiaV5-GGUF/resolve/main/Mistral-EuformiaV5.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-EuformiaV5-GGUF/resolve/main/Mistral-EuformiaV5.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-EuformiaV5-GGUF/resolve/main/Mistral-EuformiaV5.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-EuformiaV5-GGUF/resolve/main/Mistral-EuformiaV5.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-EuformiaV5-GGUF/resolve/main/Mistral-EuformiaV5.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-EuformiaV5-GGUF/resolve/main/Mistral-EuformiaV5.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-EuformiaV5-GGUF/resolve/main/Mistral-EuformiaV5.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mlx-community/OLMoE-1B-7B-0125-6bit
|
mlx-community
| 2025-03-05T20:53:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"moe",
"olmo",
"mlx",
"en",
"dataset:allenai/OLMoE-mix-0924",
"dataset:allenai/dolmino-mix-1124",
"base_model:allenai/OLMoE-1B-7B-0125",
"base_model:quantized:allenai/OLMoE-1B-7B-0125",
"license:apache-2.0",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"6-bit",
"region:us"
] |
text-generation
| 2025-03-05T20:01:27Z |
---
license: apache-2.0
language:
- en
tags:
- moe
- olmo
- olmoe
- mlx
co2_eq_emissions: 1
datasets:
- allenai/OLMoE-mix-0924
- allenai/dolmino-mix-1124
library_name: transformers
base_model: allenai/OLMoE-1B-7B-0125
---
# mlx-community/OLMoE-1B-7B-0125-6bit
The Model [mlx-community/OLMoE-1B-7B-0125-6bit](https://huggingface.co/mlx-community/OLMoE-1B-7B-0125-6bit) was
converted to MLX format from [allenai/OLMoE-1B-7B-0125](https://huggingface.co/allenai/OLMoE-1B-7B-0125)
using mlx-lm version **0.21.6**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the 6-bit quantized model and its tokenizer from the Hub
model, tokenizer = load("mlx-community/OLMoE-1B-7B-0125-6bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
lebronzhang224/model
|
lebronzhang224
| 2025-03-05T20:50:18Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-05T20:49:49Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lebronzhang224
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jonjew/KellyLeBrock
|
Jonjew
| 2025-03-05T20:48:37Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-05T20:47:57Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
Breathtaking medium shot photography of ohwx, A portrait of a woman with
voluminous, curly red hair against a vibrant pink background. She wears a
white turtleneck sweater with blue and white stripes on the sleeves. The
woman's gaze is direct and intense, and her lips are slightly parted. The
image has a contemporary style, emphasizing bold colors and a moody
atmosphere., smile, (upper body framing:1.3), sensual lips, eyelashes, fine
hair detail, perfect eyes, iris pattern, eyes makeup, (perfectly sharp:1.3),
realistic textures, (deep focus:1.1), negative space around subject, 8k uhd,
dslr, ultra high quality image, film grain, Fujifilm XT3
parameters:
negative_prompt: KellyLeBrock_flux_lora_v1_000002500_Weight-1.00
output:
url: >-
images/KellyLeBrock_flux_lora_v1_000002500_Weight-1.00_2025-02-22_2025-02-22-235005_0.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ohwx
license: unknown
---
# Kelly LeBrock (80s, Weird Science)(Flux)
<Gallery />
## Model description
FROM https://civitai.com/models/1285512/kelly-lebrock-80s-weird-scienceflux?modelVersionId=1450387
Trigger ohwx
Strength 1
👍 *** If you love it, like it! ***👍
workflow: https://civitai.com/models/1088678
👑 Kelly LeBrock (80s, Weird Science) 🎬
About my celebrity loras
90% of the dataset used to build my loras consists of head images only. That really helps them blend with other loras or models, since there are no hands or feet to interfere with the final image render. When you get distorted hands with a person lora, it's because there is info on hands in the dataset used to train it; that will not happen with my loras.
I've trained on Flux.1 Dev, so other merged or trained checkpoints may not work well with my loras.
The drawback is that the body may not reflect reality. Then again, that may not be a drawback.
This is a lora for Flux.1 Dev. It works with other models, but you must drop some single blocks (a good starting range is 19-32).
Trained with ai-toolkit, so merging it is not easy.
To get the best results (see the sketch below):
- Guidance: 2.2-3
- Steps (dev): 30-40
- Daemon detailer (lying sigma sampler): factor: -0.02, start: 0.06, end: 0.75
- Resolution: upscale the latent by 1.25 or 1.5 and you'll get awesome results (takes longer, but worth it)
- Trigger word (may work better in certain contexts): ohwx
Enjoy!
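A minimal diffusers sketch using the settings above (the `weight_name` is an assumption — check the Files & versions tab for the actual `.safetensors` file name):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name below is hypothetical -- replace with the actual file from this repo
pipe.load_lora_weights("Jonjew/KellyLeBrock", weight_name="KellyLeBrock_flux_lora_v1.safetensors")

image = pipe(
    "ohwx, breathtaking portrait photography, film grain",  # "ohwx" is the trigger word
    guidance_scale=2.5,        # recommended range: 2.2-3
    num_inference_steps=35,    # recommended for dev: 30-40
).images[0]
image.save("ohwx.png")
```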
## Trigger words
You should use `ohwx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/KellyLeBrock/tree/main) them in the Files & versions tab.
|
Jonjew/ShaniaTwain
|
Jonjew
| 2025-03-05T20:46:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-05T20:45:38Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
Breathtaking over the shoulder shot photography of ohwx looking at viewer,
imperfections, necklace, looking at viewer, eyelashes, fine hair detail,
entire hairstyle visible, perfect eyes with iris pattern, sensual lips,
nose, (perfectly sharp:1.3), realistic textures, (deep focus:1.5), 8k uhd,
dslr, ultra high quality image, film grain, Fujifilm XT3
parameters:
negative_prompt: ShaniaTwain_flux_lora_v1_Weight-1.00
output:
url: >-
images/ShaniaTwain_flux_lora_v1_Weight-1.00_2025-02-08_2025-02-08-010944_0.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ohwx
license: unknown
---
# Shania Twain (singer)(Flux)
<Gallery />
## Model description
FROM https://civitai.com/models/1230283/shania-twain-singerflux
Trigger ohwx
Strength 1
👑 Shania Twain (singer) 🎤
About my celebrity loras
90% of the dataset used to build my loras consists of head images only. That really helps them blend with other loras or models, since there are no hands or feet to interfere with the final image render. When you get distorted hands with a person lora, it's because there is info on hands in the dataset used to train it; that will not happen with my loras.
I've trained on Flux.1 Dev, so other merged or trained checkpoints may not work well with my loras.
The drawback is that the body may not reflect reality. Then again, that may not be a drawback.
This is a lora for Flux.1 Dev. It works with other models, but you must drop some single blocks (a good starting range is 19-32).
Trained with ai-toolkit, so merging it is not easy.
To get the best results (see the sketch below):
- Guidance: 2.2-3
- Steps (dev): 30-40
- Daemon detailer (lying sigma sampler): factor: -0.02, start: 0.06, end: 0.75
- Resolution: upscale the latent by 1.25 or 1.5 and you'll get awesome results (takes longer, but worth it)
- Trigger word (may work better in certain contexts): ohwx
Enjoy!
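A minimal diffusers sketch using the settings above (the `weight_name` is an assumption — check the Files & versions tab for the actual `.safetensors` file name):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name below is hypothetical -- replace with the actual file from this repo
pipe.load_lora_weights("Jonjew/ShaniaTwain", weight_name="ShaniaTwain_flux_lora_v1.safetensors")

image = pipe(
    "ohwx, breathtaking portrait photography, film grain",  # "ohwx" is the trigger word
    guidance_scale=2.5,        # recommended range: 2.2-3
    num_inference_steps=35,    # recommended for dev: 30-40
).images[0]
image.save("ohwx.png")
```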
## Trigger words
You should use `ohwx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/ShaniaTwain/tree/main) them in the Files & versions tab.
|
Tonystorm23/gpt2-reuters-tokenizer
|
Tonystorm23
| 2025-03-05T20:45:34Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-05T20:45:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DJKPARIS/aliciatest2
|
DJKPARIS
| 2025-03-05T20:45:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-05T20:23:12Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: DJKPARIS/aliciatest2
---
# Aliciatest2
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `DJKPARIS/aliciatest2` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('DJKPARIS/aliciatest2', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/Qwen-2.5-7B-Woonderer-0.1-GGUF
|
mradermacher
| 2025-03-05T20:45:07Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Ba2han/Qwen-2.5-7B-Woonderer-0.1",
"base_model:quantized:Ba2han/Qwen-2.5-7B-Woonderer-0.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-05T19:45:17Z |
---
base_model: Ba2han/Qwen-2.5-7B-Woonderer-0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Ba2han/Qwen-2.5-7B-Woonderer-0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
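As a quick sanity check, single-file quants like the ones below can be run directly with llama.cpp; a minimal sketch (a recent llama.cpp build is assumed, and the file name matches the Q4_K_M entry in the table below):
```bash
# Download and run the Q4_K_M quant straight from this repo
llama-cli --hf-repo mradermacher/Qwen-2.5-7B-Woonderer-0.1-GGUF \
  --hf-file Qwen-2.5-7B-Woonderer-0.1.Q4_K_M.gguf \
  -p "Hello, world"
```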
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Woonderer-0.1-GGUF/resolve/main/Qwen-2.5-7B-Woonderer-0.1.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Woonderer-0.1-GGUF/resolve/main/Qwen-2.5-7B-Woonderer-0.1.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Woonderer-0.1-GGUF/resolve/main/Qwen-2.5-7B-Woonderer-0.1.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Woonderer-0.1-GGUF/resolve/main/Qwen-2.5-7B-Woonderer-0.1.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Woonderer-0.1-GGUF/resolve/main/Qwen-2.5-7B-Woonderer-0.1.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Woonderer-0.1-GGUF/resolve/main/Qwen-2.5-7B-Woonderer-0.1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Woonderer-0.1-GGUF/resolve/main/Qwen-2.5-7B-Woonderer-0.1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Woonderer-0.1-GGUF/resolve/main/Qwen-2.5-7B-Woonderer-0.1.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Woonderer-0.1-GGUF/resolve/main/Qwen-2.5-7B-Woonderer-0.1.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Woonderer-0.1-GGUF/resolve/main/Qwen-2.5-7B-Woonderer-0.1.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Woonderer-0.1-GGUF/resolve/main/Qwen-2.5-7B-Woonderer-0.1.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Woonderer-0.1-GGUF/resolve/main/Qwen-2.5-7B-Woonderer-0.1.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mshahoyi/qwen-model-diff-sleeper
|
mshahoyi
| 2025-03-05T20:41:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-05T17:46:54Z |
---
base_model: unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mshahoyi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
geoplus/task-5-Qwen-Qwen1.5-0.5B
|
geoplus
| 2025-03-05T20:40:10Z | 1,116 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | 2025-02-23T18:00:08Z |
---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
mradermacher/Llama-3.1-8B-Instruct-Jopara-V3.2-GGUF
|
mradermacher
| 2025-03-05T20:39:40Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:rubuntu/Llama-3.1-8B-Instruct-Jopara-V3.2",
"base_model:quantized:rubuntu/Llama-3.1-8B-Instruct-Jopara-V3.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-05T20:01:39Z |
---
base_model: rubuntu/Llama-3.1-8B-Instruct-Jopara-V3.2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/rubuntu/Llama-3.1-8B-Instruct-Jopara-V3.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
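As a quick sanity check, single-file quants like the ones below can be run directly with llama.cpp; a minimal sketch (a recent llama.cpp build is assumed, and the file name matches the Q4_K_M entry in the table below):
```bash
# Download and run the Q4_K_M quant straight from this repo
llama-cli --hf-repo mradermacher/Llama-3.1-8B-Instruct-Jopara-V3.2-GGUF \
  --hf-file Llama-3.1-8B-Instruct-Jopara-V3.2.Q4_K_M.gguf \
  -p "Hello, world"
```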
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Jopara-V3.2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Jopara-V3.2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Jopara-V3.2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Jopara-V3.2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Jopara-V3.2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Jopara-V3.2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Jopara-V3.2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Jopara-V3.2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Jopara-V3.2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Jopara-V3.2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Jopara-V3.2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Jopara-V3.2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Jopara-V3.2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Jopara-V3.2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Jopara-V3.2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Jopara-V3.2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Jopara-V3.2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Jopara-V3.2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Jopara-V3.2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Jopara-V3.2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Jopara-V3.2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Jopara-V3.2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Jopara-V3.2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Jopara-V3.2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
agro-gpt/agrozeka
|
agro-gpt
| 2025-03-05T20:36:22Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-05T20:36:22Z |
---
license: apache-2.0
---
|
zhuchi76/vit-base-transfer-learning-oxford-pets
|
zhuchi76
| 2025-03-05T20:35:02Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-03-05T20:25:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jmalejandrob79/cndmrhr02
|
jmalejandrob79
| 2025-03-05T20:34:38Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-08T17:11:06Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: cndmrntnh
---
# Cndmrntnh
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `cndmrntnh` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jmalejandrob79/cndmrntnh', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
ClaudioItaly/Exurbia-Delta9
|
ClaudioItaly
| 2025-03-05T20:32:55Z | 14 | 1 | null |
[
"safetensors",
"gemma2",
"arxiv:2306.01708",
"region:us"
] | null | 2025-02-25T17:06:18Z |
---
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [ClaudioItaly/Vangelus-Secundus](https://huggingface.co/ClaudioItaly/Vangelus-Secundus) as a base.
### Models Merged
The following models were included in the merge:
* [ClaudioItaly/1852-9B](https://huggingface.co/ClaudioItaly/1852-9B)
* [spacematt/LinguaCraftica-9B](https://huggingface.co/spacematt/LinguaCraftica-9B)
* [sam-paech/Darkest-muse-v1](https://huggingface.co/sam-paech/Darkest-muse-v1)
* [sam-paech/Delirium-v1](https://huggingface.co/sam-paech/Delirium-v1)
* [ClaudioItaly/Pullulation-2-9B](https://huggingface.co/ClaudioItaly/Pullulation-2-9B)
* [sam-paech/Quill-v1](https://huggingface.co/sam-paech/Quill-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: spacematt/LinguaCraftica-9B
parameters:
weight: 0.35
- model: ClaudioItaly/1852-9B
parameters:
weight: 0.25
- model: ClaudioItaly/Pullulation-2-9B
parameters:
weight: 0.15
- model: sam-paech/Darkest-muse-v1
parameters:
weight: 0.10
- model: sam-paech/Delirium-v1
parameters:
weight: 0.10
- model: sam-paech/Quill-v1
parameters:
weight: 0.05
merge_method: ties
base_model: ClaudioItaly/Vangelus-Secundus
parameters:
density: 0.6
mask_threshold: 0.015
normalize: true
int8_mask: true
dtype: bfloat16
```
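For reference, a recipe like the one above is normally fed to mergekit's CLI; a minimal sketch (assuming the YAML is saved as `config.yaml` and mergekit is installed):
```bash
pip install mergekit
# Produce the merged model from the YAML recipe; --cuda runs the merge on GPU
mergekit-yaml config.yaml ./Exurbia-Delta9 --cuda
```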
|
efficient-speech/lite-whisper-large-v3-turbo-acc
|
efficient-speech
| 2025-03-05T20:31:37Z | 36 | 2 |
transformers
|
[
"transformers",
"safetensors",
"lite-whisper",
"feature-extraction",
"audio",
"automatic-speech-recognition",
"whisper",
"hf-asr-leaderboard",
"custom_code",
"arxiv:2502.20583",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:apache-2.0",
"region:us"
] |
automatic-speech-recognition
| 2025-02-26T04:22:23Z |
---
base_model: openai/whisper-large-v3-turbo
library_name: transformers
license: apache-2.0
pipeline_tag: automatic-speech-recognition
tags:
- audio
- automatic-speech-recognition
- whisper
- hf-asr-leaderboard
---
# Model Card for Lite-Whisper large-v3-turbo-acc
<!-- Provide a quick summary of what the model is/does. -->
Lite-Whisper is a version of OpenAI Whisper compressed with LiteASR. See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for details.
## Benchmark Results
The following is the average word error rate (WER) evaluated on the [ESB datasets](https://huggingface.co/datasets/hf-audio/esb-datasets-test-only-sorted):
| Model | Average WER (↓) | Encoder Size | Decoder Size |
|-------|----------------|--------------|--------------|
| [whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) | 10.1 | 635M | 907M |
| [lite-whisper-large-v3-acc](https://huggingface.co/efficient-speech/lite-whisper-large-v3-acc) | 10.1 | 429M | 907M |
| [lite-whisper-large-v3](https://huggingface.co/efficient-speech/lite-whisper-large-v3) | 10.2 | 377M | 907M |
| [lite-whisper-large-v3-fast](https://huggingface.co/efficient-speech/lite-whisper-large-v3-fast) | 11.3 | 308M | 907M |
| | | | |
| [whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) | 10.1 | 635M | 172M |
| [lite-whisper-large-v3-turbo-acc](https://huggingface.co/efficient-speech/lite-whisper-large-v3-turbo-acc) | 10.2 | 421M | 172M |
| [lite-whisper-large-v3-turbo](https://huggingface.co/efficient-speech/lite-whisper-large-v3-turbo) | 12.6 | 374M | 172M |
| [lite-whisper-large-v3-turbo-fast](https://huggingface.co/efficient-speech/lite-whisper-large-v3-turbo-fast) | 20.1 | 313M | 172M |
| | | | |
| [whisper-medium](https://huggingface.co/openai/whisper-medium) | 14.8 | 306M | 457M |
## Quick Start
The easiest way to run our model is through our integration with the Hugging Face Transformers library.
We provide model weights for the compressed versions of the OpenAI Whisper series [here](https://huggingface.co/efficient-speech).
```python
import librosa
import torch
from transformers import AutoProcessor, AutoModel
device = "cuda:0"
dtype = torch.float16
# load the compressed Whisper model (this card's variant)
model = AutoModel.from_pretrained(
    "efficient-speech/lite-whisper-large-v3-turbo-acc",
    trust_remote_code=True,
)
model.to(dtype).to(device)
# we use the same processor as the original model
processor = AutoProcessor.from_pretrained("openai/whisper-large-v3")
# set the path to your audio file
path = "path/to/audio.wav"
audio, _ = librosa.load(path, sr=16000)
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
input_features = input_features.to(dtype).to(device)
predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(
predicted_ids,
skip_special_tokens=True
)[0]
print(transcription)
```
## Citation
If you use LiteASR in your research, please cite the following paper:
```
@misc{kamahori2025liteasrefficientautomaticspeech,
title={LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation},
author={Keisuke Kamahori and Jungo Kasai and Noriyuki Kojima and Baris Kasikci},
year={2025},
eprint={2502.20583},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2502.20583},
}
```
|
efficient-speech/lite-whisper-large-v3-turbo-fast
|
efficient-speech
| 2025-03-05T20:31:23Z | 41 | 2 |
transformers
|
[
"transformers",
"safetensors",
"lite-whisper",
"feature-extraction",
"audio",
"automatic-speech-recognition",
"whisper",
"hf-asr-leaderboard",
"custom_code",
"arxiv:2502.20583",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:apache-2.0",
"region:us"
] |
automatic-speech-recognition
| 2025-02-26T04:29:10Z |
---
base_model: openai/whisper-large-v3-turbo
library_name: transformers
license: apache-2.0
pipeline_tag: automatic-speech-recognition
tags:
- audio
- automatic-speech-recognition
- whisper
- hf-asr-leaderboard
---
# Model Card for Lite-Whisper large-v3-turbo-fast
<!-- Provide a quick summary of what the model is/does. -->
Lite-Whisper is a version of OpenAI Whisper compressed with LiteASR. See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for details.
## Benchmark Results
The following is the average word error rate (WER) evaluated on the [ESB datasets](https://huggingface.co/datasets/hf-audio/esb-datasets-test-only-sorted):
| Model | Average WER (↓) | Encoder Size | Decoder Size |
|-------|----------------|--------------|--------------|
| [whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) | 10.1 | 635M | 907M |
| [lite-whisper-large-v3-acc](https://huggingface.co/efficient-speech/lite-whisper-large-v3-acc) | 10.1 | 429M | 907M |
| [lite-whisper-large-v3](https://huggingface.co/efficient-speech/lite-whisper-large-v3) | 10.2 | 377M | 907M |
| [lite-whisper-large-v3-fast](https://huggingface.co/efficient-speech/lite-whisper-large-v3-fast) | 11.3 | 308M | 907M |
| | | | |
| [whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) | 10.1 | 635M | 172M |
| [lite-whisper-large-v3-turbo-acc](https://huggingface.co/efficient-speech/lite-whisper-large-v3-turbo-acc) | 10.2 | 421M | 172M |
| [lite-whisper-large-v3-turbo](https://huggingface.co/efficient-speech/lite-whisper-large-v3-turbo) | 12.6 | 374M | 172M |
| [lite-whisper-large-v3-turbo-fast](https://huggingface.co/efficient-speech/lite-whisper-large-v3-turbo-fast) | 20.1 | 313M | 172M |
| | | | |
| [whisper-medium](https://huggingface.co/openai/whisper-medium) | 14.8 | 306M | 457M |
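A minimal loading sketch for this variant, mirroring the quickstart of the other Lite-Whisper cards (it assumes the same `trust_remote_code` loading path applies):
```python
import torch
from transformers import AutoModel, AutoProcessor

# Load the compressed model for this variant
model = AutoModel.from_pretrained(
    "efficient-speech/lite-whisper-large-v3-turbo-fast",
    trust_remote_code=True,
).to(torch.float16).to("cuda:0")

# The processor is shared with the original OpenAI model
processor = AutoProcessor.from_pretrained("openai/whisper-large-v3")
```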
|
prithivMLmods/Magellanic-Llama3.3-43B-R999
|
prithivMLmods
| 2025-03-05T20:30:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"Sft",
"Llama3.3",
"conversational",
"en",
"zh",
"license:llama3.3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-05T06:40:27Z |
---
license: llama3.3
language:
- en
- zh
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- Sft
- Llama3.3
---

# **Magellanic-Llama3.3-43B-R999**
> Magellanic-Llama3.3-43B-R999 is based on the LLaMA 3.3 43B architecture, designed as an experimental model to test the limits of large-scale language processing. While it incorporates advanced techniques in long-context reasoning and multi-step problem-solving, its performance may vary significantly due to ongoing optimizations. This model is intended for research and development purposes rather than production use.
## **Key Characteristics**
1. **Experimental Performance**: While designed for high-capacity reasoning, this model may exhibit inconsistent behavior in certain tasks due to unoptimized fine-tuning.
2. **Limited Instruction Following**: Although it can process complex prompts, response accuracy and coherence may degrade in structured tasks.
3. **Context Sensitivity Issues**: While it supports extended input contexts of up to 128K tokens, its ability to maintain consistency over long outputs is still being refined.
4. **Multilingual Support**: Supports multiple languages but may struggle with fluency and accuracy in non-English outputs.
5. **High Resource Consumption**: Due to its 43B parameters, it requires extensive computational resources, making it impractical for many standard applications.
## **Quickstart with transformers**
Here is an example of how to load the tokenizer and model using `apply_chat_template`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Magellanic-Llama3.3-43B-R999"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "What are the key challenges in training large-scale AI models?"
messages = [
{"role": "system", "content": "You are an experimental AI model designed for research purposes."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## **Intended Use**
1. **Research & Experimentation**:
Designed to explore the limits of large-scale architectures, long-context retention, and reasoning.
2. **Development & Fine-Tuning Testing**:
Useful for testing adaptation strategies, optimization methods, and instruction tuning.
3. **Theoretical AI Studies**:
Can assist in analyzing the behavior of large models, particularly in multi-turn interactions and complex queries.
4. **Multilingual NLP Exploration**:
Serves as a testbed for multilingual understanding, though with inconsistent performance across languages.
5. **Extended Content Generation**:
Capable of generating lengthy responses but with a higher risk of logical errors and inconsistencies.
## **Limitations**
1. **Unstable Performance**:
As an experimental model, response quality may fluctuate significantly across tasks.
2. **High Computational Cost**:
Requires extensive resources to operate, making it difficult to deploy in production settings.
3. **Inconsistent Reasoning**:
May struggle with maintaining logical consistency in complex reasoning tasks.
4. **Bias & Hallucination Risks**:
Outputs may include factual inaccuracies, biases, or fabricated information.
5. **Limited Real-World Awareness**:
Does not have real-time knowledge beyond its training data.
6. **Prompt Dependence**:
Performance is highly sensitive to prompt structuring, with poorly framed prompts leading to degraded output quality.
|
volfyd/Qwen2.5-Coder-0.5B-Q8_0-GGUF
|
volfyd
| 2025-03-05T20:23:44Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-Coder-0.5B",
"base_model:quantized:Qwen/Qwen2.5-Coder-0.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-03-05T20:23:39Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-0.5B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
- llama-cpp
- gguf-my-repo
---
# volfyd/Qwen2.5-Coder-0.5B-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-0.5B`](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo volfyd/Qwen2.5-Coder-0.5B-Q8_0-GGUF --hf-file qwen2.5-coder-0.5b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo volfyd/Qwen2.5-Coder-0.5B-Q8_0-GGUF --hf-file qwen2.5-coder-0.5b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo volfyd/Qwen2.5-Coder-0.5B-Q8_0-GGUF --hf-file qwen2.5-coder-0.5b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo volfyd/Qwen2.5-Coder-0.5B-Q8_0-GGUF --hf-file qwen2.5-coder-0.5b-q8_0.gguf -c 2048
```
|
awhiteside/CodeRankEmbed-Q8_0-GGUF
|
awhiteside
| 2025-03-05T20:22:30Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:nomic-ai/CodeRankEmbed",
"base_model:quantized:nomic-ai/CodeRankEmbed",
"license:mit",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-03-05T20:22:28Z |
---
base_model: nomic-ai/CodeRankEmbed
library_name: sentence-transformers
license: mit
tags:
- llama-cpp
- gguf-my-repo
---
# awhiteside/CodeRankEmbed-Q8_0-GGUF
This model was converted to GGUF format from [`nomic-ai/CodeRankEmbed`](https://huggingface.co/nomic-ai/CodeRankEmbed) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nomic-ai/CodeRankEmbed) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo awhiteside/CodeRankEmbed-Q8_0-GGUF --hf-file coderankembed-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo awhiteside/CodeRankEmbed-Q8_0-GGUF --hf-file coderankembed-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo awhiteside/CodeRankEmbed-Q8_0-GGUF --hf-file coderankembed-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo awhiteside/CodeRankEmbed-Q8_0-GGUF --hf-file coderankembed-q8_0.gguf -c 2048
```
|
TheBlueObserver/Llama-3.2-1B-Instruct__gr-r128-a128-epoch2
|
TheBlueObserver
| 2025-03-05T20:21:49Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-03-05T20:07:57Z |
# TheBlueObserver/Llama-3.2-1B-Instruct__gr-r128-a128-epoch2 Model Card
## LoRA Details
- **Rank**: 128
- **Alpha**: 128
## Training Details
- **Datasets**: gr_medical
- **Limit**: -1
- **Max Steps**: default
- **Epochs**: 2
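For reference, a LoRA configuration matching these hyperparameters could be set up as in the minimal sketch below. It assumes the base model implied by the repo name (meta-llama/Llama-3.2-1B-Instruct) and typical attention-projection target modules; the actual training script, target modules, and dropout are not published.
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
config = LoraConfig(
    r=128,           # rank, as listed above
    lora_alpha=128,  # alpha, as listed above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed, not stated in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # sanity-check the adapter size
```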
|
Jonjew/DonnaMills
|
Jonjew
| 2025-03-05T20:21:47Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-05T20:21:40Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: donna-mills
output:
url: images/magicquill (3).png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: donna-mills
license: unknown
---
# Donna Mills (Flux) - Actress
<Gallery />
## Model description
FROM https://civitai.com/models/1227593/donna-mills-flux-actress?modelVersionId=1383178
Trigger: donna-mills
If you like this LoRA and generate some images, please share them here. It helps me learn what works and what does not!
Strictly speaking, no trigger word is needed (all the samples were done without one), but you can use 'donna-mills' if you want.
Donna Mills is an American actress best known for her role as Abby Cunningham on the hit primetime soap opera Knots Landing (1980–1989). She has had a long career in television and film, often portraying strong, glamorous, and sometimes scheming characters.
I create these LoRAs for less popular people I do not see represented by other creators.
Likes, shares, and buzz are always appreciated, as they help me decide whether to create similar ones or switch to other niche genres.
Gifting me buzz is great, but training is 99% done locally, so others could use it more.
## Trigger words
You should use `donna-mills` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/DonnaMills/tree/main) them in the Files & versions tab.
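A minimal generation sketch with diffusers is shown below; it assumes a CUDA GPU with enough memory for FLUX.1-dev, and the prompt is only illustrative.
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Jonjew/DonnaMills")  # apply this LoRA

image = pipe(
    "donna-mills, portrait photo",  # optional trigger word, per the note above
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("donna-mills.png")
```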
|
Clybius/chroma-debug-GGUF
|
Clybius
| 2025-03-05T20:21:23Z | 0 | 0 | null |
[
"gguf",
"base_model:lodestones/chroma-debug-development-only",
"base_model:quantized:lodestones/chroma-debug-development-only",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-03-05T20:12:45Z |
---
license: cc-by-nc-sa-4.0
base_model:
- lodestones/chroma-debug-development-only
---
As per the original repo:
All models listed in this repo are purely for research purposes.
Once ready, they will be uploaded to a separate repo under the Apache 2.0 license.
|
manavgoel4/codeassitant-tinyllama-1b7
|
manavgoel4
| 2025-03-05T20:20:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-05T20:20:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sudhanshu-soft/myllama3_dpo_vllm_16bit
|
sudhanshu-soft
| 2025-03-05T20:20:08Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-05T20:10:53Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sudhanshu-soft
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
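A minimal inference sketch with plain transformers, assuming the uploaded weights are a merged 16-bit checkpoint as the repo name suggests:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sudhanshu-soft/myllama3_dpo_vllm_16bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0]))
```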
|
FlorianJc/Phi-4-mini-instruct-vllm-fp8
|
FlorianJc
| 2025-03-05T20:19:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"nlp",
"code",
"vllm",
"fp8",
"conversational",
"custom_code",
"multilingual",
"ar",
"zh",
"cs",
"da",
"nl",
"en",
"fi",
"fr",
"de",
"he",
"hu",
"it",
"ja",
"ko",
"no",
"pl",
"pt",
"ru",
"es",
"sv",
"th",
"tr",
"uk",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-05T20:07:28Z |
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-mini-instruct/resolve/main/LICENSE
language:
- multilingual
- ar
- zh
- cs
- da
- nl
- en
- fi
- fr
- de
- he
- hu
- it
- ja
- ko
- 'no'
- pl
- pt
- ru
- es
- sv
- th
- tr
- uk
pipeline_tag: text-generation
tags:
- nlp
- code
- vllm
- fp8
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
library_name: transformers
quantized_by: FlorianJc
---
## Model info
FP8 quantized version of Phi-4-mini-instruct.
# Original model README.md file:
## Model Summary
Phi-4-mini-instruct is a lightweight open model built upon synthetic data and filtered publicly available websites - with a focus on high-quality, reasoning dense data. The model belongs to the Phi-4 model family and supports 128K token context length. The model underwent an enhancement process, incorporating both supervised fine-tuning and direct preference optimization to support precise instruction adherence and robust safety measures.
📰 [Phi-4-mini Microsoft Blog](https://aka.ms/phi4-feb2025) <br>
📖 [Phi-4-mini Technical Report](https://aka.ms/phi-4-multimodal/techreport) <br>
👩🍳 [Phi Cookbook](https://github.com/microsoft/PhiCookBook) <br>
🏡 [Phi Portal](https://azure.microsoft.com/en-us/products/phi) <br>
🖥️ Try It [Azure](https://aka.ms/phi-4-mini/azure), [Huggingface](https://huggingface.co/spaces/microsoft/phi-4-mini) <br>
🎉**Phi-4**: [[multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-multimodal-instruct-onnx)];
[[mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-mini-instruct-onnx)]
## Intended Uses
### Primary Use Cases
The model is intended for broad multilingual commercial and research use. The model provides uses for general purpose AI systems and applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially math and logic).
The model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
### Use Case Considerations
The model is not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models, as well as performance difference across languages, as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios.
Developers should be aware of and adhere to applicable laws or regulations (including but not limited to privacy, trade compliance laws, etc.) that are relevant to their use case.
***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.***
## Release Notes
This release of Phi-4-mini-instruct is based on valuable user feedback from the Phi-3 series. The Phi-4-mini model employs a new architecture for efficiency and a larger vocabulary for multilingual support; better post-training techniques were used for instruction following and function calling, and additional data led to substantial gains on key capabilities. It is anticipated that most use cases will benefit from this release, but users are encouraged to test in their particular AI applications. The enthusiastic support for the Phi-4 series is greatly appreciated. Feedback on Phi-4-mini-instruct is welcomed and crucial to the model's evolution and improvement.
### Model Quality
To understand the capabilities, the 3.8B parameters Phi-4-mini-instruct model was compared with a set of models over a variety of benchmarks using an internal benchmark platform (See Appendix A for benchmark methodology). A high-level overview of the model quality is as follows:
| Benchmark | Similar size | | | | |2x size | | | | | |
|----------------------------------|-------------|-------------------|-------------------|-------------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| | Phi-4 mini-Ins | Phi-3.5-mini-Ins | Llama-3.2-3B-Ins | Mistral-3B | Qwen2.5-3B-Ins | Qwen2.5-7B-Ins | Mistral-8B-2410 | Llama-3.1-8B-Ins | Llama-3.1-Tulu-3-8B | Gemma2-9B-Ins | GPT-4o-mini-2024-07-18 |
| **Popular aggregated benchmark** | | | | | | | | | | | |
| Arena Hard | 32.8 | 34.4 | 17.0 | 26.9 | 32.0 | 55.5 | 37.3 | 25.7 | 42.7 | 43.7 | 53.7 |
| BigBench Hard (0-shot, CoT) | 70.4 | 63.1 | 55.4 | 51.2 | 56.2 | 72.4 | 53.3 | 63.4 | 55.5 | 65.7 | 80.4 |
| MMLU (5-shot) | 67.3 | 65.5 | 61.8 | 60.8 | 65.0 | 72.6 | 63.0 | 68.1 | 65.0 | 71.3 | 77.2 |
| MMLU-Pro (0-shot, CoT) | 52.8 | 47.4 | 39.2 | 35.3 | 44.7 | 56.2 | 36.6 | 44.0 | 40.9 | 50.1 | 62.8 |
| **Reasoning** | | | | | | | | | | | |
| ARC Challenge (10-shot) | 83.7 | 84.6 | 76.1 | 80.3 | 82.6 | 90.1 | 82.7 | 83.1 | 79.4 | 89.8 | 93.5 |
| BoolQ (2-shot) | 81.2 | 77.7 | 71.4 | 79.4 | 65.4 | 80.0 | 80.5 | 82.8 | 79.3 | 85.7 | 88.7 |
| GPQA (0-shot, CoT) | 25.2 | 26.6 | 24.3 | 24.4 | 23.4 | 30.6 | 26.3 | 26.3 | 29.9 | 39.1 | 41.1 |
| HellaSwag (5-shot) | 69.1 | 72.2 | 77.2 | 74.6 | 74.6 | 80.0 | 73.5 | 72.8 | 80.9 | 87.1 | 88.7 |
| OpenBookQA (10-shot) | 79.2 | 81.2 | 72.6 | 79.8 | 79.3 | 82.6 | 80.2 | 84.8 | 79.8 | 90.0 | 90.0 |
| PIQA (5-shot) | 77.6 | 78.2 | 68.2 | 73.2 | 72.6 | 76.2 | 81.2 | 83.2 | 78.3 | 83.7 | 88.7 |
| Social IQA (5-shot) | 72.5 | 75.1 | 68.3 | 73.9 | 75.3 | 75.3 | 77.6 | 71.8 | 73.4 | 74.7 | 82.9 |
| TruthfulQA (MC2) (10-shot) | 66.4 | 65.2 | 59.2 | 62.9 | 64.3 | 69.4 | 63.0 | 69.2 | 64.1 | 76.6 | 78.2 |
| Winogrande (5-shot) | 67.0 | 72.2 | 53.2 | 59.8 | 63.3 | 71.1 | 63.1 | 64.7 | 65.4 | 74.0 | 76.9 |
| **Multilingual** | | | | | | | | | | | |
| Multilingual MMLU (5-shot) | 49.3 | 51.8 | 48.1 | 46.4 | 55.9 | 64.4 | 53.7 | 56.2 | 54.5 | 63.8 | 72.9 |
| MGSM (0-shot, CoT) | 63.9 | 49.6 | 44.6 | 44.6 | 53.5 | 64.5 | 56.7 | 56.7 | 58.6 | 75.1 | 81.7 |
| **Math** | | | | | | | | | | | |
| GSM8K (8-shot, CoT) | 88.6 | 76.9 | 75.6 | 80.1 | 80.6 | 88.7 | 81.9 | 82.4 | 84.3 | 84.9 | 91.3 |
| MATH (0-shot, CoT) | 64.0 | 49.8 | 46.7 | 41.8 | 61.7 | 60.4 | 41.6 | 47.6 | 46.1 | 51.3 | 70.2 |
| **Overall** | **63.5** | **60.5** | **56.2** | **56.9** | **60.1** | **67.9** | **60.2** | **62.3** | **60.9** | **65.0** | **75.5** |
Overall, the model with only 3.8B-param achieves a similar level of multilingual language understanding and reasoning ability as much larger models. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store too much factual knowledge, therefore, users may experience factual incorrectness. However, it may be possible to resolve such weakness by augmenting Phi-4 with a search engine, particularly when using the model under RAG settings.
## Usage
### Tokenizer
Phi-4-mini-instruct supports a vocabulary size of up to `200064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-4-mini-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
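As a concrete illustration, extending the tokenizer for fine-tuning might look like the minimal sketch below; the added token strings are hypothetical, and the embedding matrix only needs resizing if the new vocabulary outgrows it.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Register custom tokens (hypothetical names) for downstream fine-tuning.
num_added = tokenizer.add_tokens(["<|my_tool_call|>", "<|/my_tool_call|>"])
if num_added and len(tokenizer) > model.get_input_embeddings().num_embeddings:
    model.resize_token_embeddings(len(tokenizer))
```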
### Input Formats
Given the nature of the training data, the Phi-4-mini-instruct
model is best suited for prompts using specific formats.
Below are the two primary formats:
#### Chat format
This format is used for general conversation and instructions:
```yaml
<|system|>Insert System Message<|end|><|user|>Insert User Message<|end|><|assistant|>
```
#### Tool-enabled function-calling format
This format is used when the user wants the model to provide function calls based on the given tools. The user should provide the available tools in the system prompt, wrapped by <|tool|> and <|/tool|> tokens. The tools should be specified in JSON format, using a JSON dump structure. Example:
```
<|system|>You are a helpful assistant with some tools.<|tool|>[{"name": "get_weather_updates", "description": "Fetches weather updates for a given city using the RapidAPI Weather API.", "parameters": {"city": {"description": "The name of the city for which to retrieve weather information.", "type": "str", "default": "London"}}}]<|/tool|><|end|><|user|>What is the weather like in Paris today?<|end|><|assistant|>
```
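A minimal sketch of assembling this prompt in Python, serializing the same tool schema with a standard `json.dumps`:
```python
import json

tools = [{
    "name": "get_weather_updates",
    "description": "Fetches weather updates for a given city using the RapidAPI Weather API.",
    "parameters": {
        "city": {
            "description": "The name of the city for which to retrieve weather information.",
            "type": "str",
            "default": "London",
        }
    },
}]

prompt = (
    "<|system|>You are a helpful assistant with some tools."
    f"<|tool|>{json.dumps(tools)}<|/tool|><|end|>"
    "<|user|>What is the weather like in Paris today?<|end|>"
    "<|assistant|>"
)
```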
### Inference with vLLM
#### Requirements
List of required packages:
```
flash_attn==2.7.4.post1
torch==2.5.1
vllm>=0.7.3
```
#### Example
To perform inference using vLLM, you can use the following code snippet:
```python
from vllm import LLM, SamplingParams
llm = LLM(model="microsoft/Phi-4-mini-instruct", trust_remote_code=True)
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
sampling_params = SamplingParams(
max_tokens=500,
temperature=0.0,
)
output = llm.chat(messages=messages, sampling_params=sampling_params)
print(output[0].outputs[0].text)
```
### Inference with Transformers
#### Requirements
The Phi-4 family has been integrated in `transformers` version `4.49.0`. The installed `transformers` version can be verified with `pip list | grep transformers`.
Python 3.8 will work best.
List of required packages:
```
flash_attn==2.7.4.post1
torch==2.5.1
transformers==4.49.0
accelerate==1.3.0
```
Phi-4-mini-instruct is also available in [Azure AI Studio](https://aka.ms/phi-4-mini/azure).
#### Example
After obtaining the Phi-4-mini-instruct model checkpoints, users can use this sample code for inference.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model_path = "microsoft/Phi-4-mini-instruct"
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
## Responsible AI Considerations
Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: The Phi models are trained primarily on English text and some additional multilingual text. Languages other than English will experience worse performance as well as performance disparities across non-English. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 4 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi 4 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, it is strongly recommended that users manually verify all API uses.
+ Long Conversation: Phi 4 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to place appropriate mitigations, like limiting conversation turns to account for the possible conversational drift.
Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. Phi 4 family of models are general purpose models. As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
+ **Architecture:** Phi-4-mini-instruct has 3.8B parameters and is a dense decoder-only Transformer model. When compared with Phi-3.5-mini, the major changes with Phi-4-mini-instruct are 200K vocabulary, grouped-query attention, and shared input and output embedding.<br>
+ **Inputs:** Text. It is best suited for prompts using the chat format.<br>
+ **Context length:** 128K tokens<br>
+ **GPUs:** 512 A100-80G<br>
+ **Training time:** 21 days<br>
+ **Training data:** 5T tokens<br>
+ **Outputs:** Generated text in response to the input<br>
+ **Dates:** Trained between November and December 2024<br>
+ **Status:** This is a static model trained on offline datasets with the cutoff date of June 2024 for publicly available data.<br>
+ **Supported languages:** Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian<br>
+ **Release date:** February 2025<br>
### Training Datasets
Phi-4-mini’s training data includes a wide variety of sources, totaling 5 trillion tokens, and is a combination of
1) publicly available documents filtered for quality, selected high-quality educational data, and code
2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (e.g., science, daily activities, theory of mind, etc.)
3) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty and helpfulness. Focus was placed on the quality of data that could potentially improve the reasoning ability for the model, and the publicly available documents were filtered to contain a preferred level of knowledge. As an example, the result of a game in the Premier League on a particular day might be good training data for frontier models, but such information was removed to leave more model capacity for reasoning given the model's small size. More details about data can be found in the Phi-4-mini-instruct technical report.
The decontamination process involved normalizing and tokenizing the dataset, then generating and comparing n-grams between the target dataset and benchmark datasets. Samples with matching n-grams above a threshold were flagged as contaminated and removed from the dataset. A detailed contamination report was generated, summarizing the matched text, matching ratio, and filtered results for further analysis.
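As an illustration, the core of such an n-gram check might look like the sketch below. The tokenizer, n-gram size, and flagging threshold used for Phi-4-mini are not public, so whitespace tokens, n=13, and a 50% overlap threshold are assumptions.
```python
def ngrams(text: str, n: int = 13) -> set:
    toks = text.lower().split()  # assumed normalization/tokenization
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def is_contaminated(sample: str, bench_ngrams: set, n: int = 13, threshold: float = 0.5) -> bool:
    sample_ngrams = ngrams(sample, n)
    if not sample_ngrams:
        return False
    overlap = len(sample_ngrams & bench_ngrams) / len(sample_ngrams)
    return overlap >= threshold  # assumed threshold

benchmark_questions = ["..."]  # placeholder benchmark corpus
train_samples = ["..."]        # placeholder training corpus
bench_ngrams = set().union(*(ngrams(q) for q in benchmark_questions))
clean_train = [s for s in train_samples if not is_contaminated(s, bench_ngrams)]
```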
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-4-mini-instruct/resolve/main/sample_finetune.py).
## Safety Evaluation and Red-Teaming
Various evaluation techniques including red teaming, adversarial conversation simulations, and multilingual safety evaluation benchmark datasets were leveraged to evaluate Phi-4 models’ propensity to produce undesirable outputs across multiple languages and risk categories. Several approaches were used to compensate for the limitations of one approach alone. Findings across the various evaluation methods indicate that safety post-training that was done as detailed in the Phi 3 Safety Post-Training paper had a positive impact across multiple languages and risk categories as observed by refusal rates (refusal to output undesirable outputs) and robustness to jailbreak techniques. Details on prior red team evaluations across Phi models can be found in the Phi 3 Safety Post-Training paper. For this release, the red team tested the model in English, Chinese, Japanese, Spanish, Portuguese, Arabic, Thai, and Russian for the following potential harms: Hate Speech and Bias, Violent Crimes, Specialized Advice, and Election Information. Their findings indicate that the model is resistant to jailbreak techniques across languages, but that language-specific attack prompts leveraging cultural context can cause the model to output harmful content. Another insight was that with function calling scenarios, the model could sometimes hallucinate function names or URL’s. The model may also be more susceptible to longer multi-turn jailbreak techniques across both English and non-English languages. These findings highlight the need for industry-wide investment in the development of high-quality safety evaluation datasets across multiple languages, including low resource languages, and risk areas that account for cultural nuances where those languages are spoken.
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-4-mini-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager"
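For example, a minimal load on such hardware might look like the sketch below (fp16 is assumed because V100 lacks bfloat16 support):
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-4-mini-instruct",
    torch_dtype=torch.float16,    # V100 has no bfloat16 support
    device_map="auto",
    attn_implementation="eager",  # avoids the flash-attention requirement
    trust_remote_code=True,
)
```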
## License
The model is licensed under the [MIT license](./LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
## Appendix A: Benchmark Methodology
We include a brief word on methodology here - and in particular, how we think about optimizing prompts.
In an ideal world, we would never change any prompts in our benchmarks to ensure it is always an apples-to-apples comparison when comparing different models. Indeed, this is our default approach, and is the case in the vast majority of models we have run to date.
There are, however, some exceptions to this. In some cases, we see a model that performs worse than expected on a given eval due to a failure to respect the output format. For example:
+ A model may refuse to answer questions (for no apparent reason), or in coding tasks models may prefix their response with “Sure, I can help with that. …” which may break the parser. In such cases, we have opted to try different system messages (e.g. “You must always respond to a question” or “Get to the point!”).
+ With some models, we observed that few shots actually hurt model performance. In this case we did allow running the benchmarks with 0-shots for all cases.
+ We have tools to convert between chat and completions APIs. When converting a chat prompt to a completion prompt, some models have different keywords e.g. Human vs User. In these cases, we do allow for model-specific mappings for chat to completion prompts.
However, we do not:
+ Pick different few-shot examples. Few shots will always be the same when comparing different models.
+ Change prompt format: e.g. if it is an A/B/C/D multiple choice, we do not tweak this to 1/2/3/4 multiple choice.
### Benchmark datasets
The model was evaluated across a breadth of public and internal benchmarks to understand its capabilities under multiple tasks and conditions. While most evaluations use English, a leading multilingual benchmark was incorporated to cover performance in select languages. More specifically,
+ Reasoning:
+ Winogrande: commonsense reasoning around pronoun resolution
+ PIQA: physical commonsense reasoning around everyday situations
+ ARC-challenge: grade-school multiple choice science questions
+ GPQA: very hard questions written and validated by experts in biology, physics, and chemistry
+ MedQA: medical question answering
+ Social IQA: social commonsense intelligence
+ BoolQ: natural questions from context
+ TruthfulQA: grounded reasoning
+ Language understanding:
+ HellaSwag: commonsense natural language inference around everyday events
+ ANLI: adversarial natural language inference
+ Function calling:
+ Berkeley Function Calling Leaderboard: function and tool calls
+ Internal function calling benchmarks
+ World knowledge:
+ TriviaQA: trivia question on general topics
+ Math:
+ GSM8K: grade-school math word problems
+ GSM8K Hard: grade-school math word problems with large values and some absurdity.
+ MATH: challenging competition math problems
+ Code:
+ HumanEval, HumanEval+, MBPP, MBPP+: Python coding tasks
+ LiveCodeBench, LiveBench: contamination-free code tasks
+ BigCode Bench: challenging programming tasks
+ Spider: SQL query tasks
+ Internal coding benchmarks
+ Instruction following:
+ IFEval: verifiable instructions
+ Internal instruction-following benchmarks
+ Multilingual:
+ MGSM: multilingual grade-school math
+ Multilingual MMLU and MMLU-pro
+ MEGA: multilingual NLP tasks
+ Popular aggregated datasets: MMLU, MMLU-pro, BigBench-Hard, AGI Eval
+ Multi-turn conversations:
+ Data generated by in-house adversarial conversation simulation tool
+ Single-turn trustworthiness evaluation:
+ DecodingTrust: a collection of trustworthiness benchmarks in eight different perspectives
+ XSTest: exaggerated safety evaluation
+ Toxigen: adversarial and hate speech detection
+ Red Team:
+ Responses to prompts provided by AI Red Team at Microsoft
|
GhaniHaider/Chatbot
|
GhaniHaider
| 2025-03-05T20:17:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-03-05T20:16:42Z |
```python
# Setup (originally Jupyter magics, consolidated here):
#   pip install --quiet --upgrade langchain-text-splitters langchain-community langgraph
#   pip install streamlit PyPDF2
import os

import requests
import streamlit as st
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from openai import OpenAI
from PyPDF2 import PdfReader

# Load and process the textbook, caching the vector store across reruns.
@st.cache_resource
def load_textbook():
    pdf_url = "https://med.mui.ac.ir/sites/med/files/users/jarah-maghz/Handbook%20of%20Neurosurgery%208.pdf"
    response = requests.get(pdf_url)
    with open("textbook.pdf", "wb") as f:
        f.write(response.content)
    reader = PdfReader("textbook.pdf")
    text = "".join(page.extract_text() for page in reader.pages if page.extract_text())
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    texts = text_splitter.split_text(text)
    embeddings = OpenAIEmbeddings()
    return FAISS.from_texts(texts, embeddings)

st.title("🩺 AI Health Assistant (RAG-powered)")
st.write(
    "This AI-powered healthcare assistant provides general medical guidance using Retrieval-Augmented Generation (RAG)."
    "\n⚠️ **Disclaimer:** This is not a substitute for professional medical advice."
)

openai_api_key = st.text_input("OpenAI API Key", type="password")
if not openai_api_key:
    st.info("Please add your OpenAI API key to continue.", icon="🗝️")
else:
    os.environ["OPENAI_API_KEY"] = openai_api_key
    vector_store = load_textbook()
    client = OpenAI(api_key=openai_api_key)
    if "messages" not in st.session_state:
        st.session_state.messages = [{
            "role": "system",
            "content": "You are a helpful healthcare assistant providing medical insights based on a neurosurgery textbook. Always advise users to consult a licensed medical professional.",
        }]
    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])
    if prompt := st.chat_input("Ask a health-related question..."):
        st.session_state.messages.append({"role": "user", "content": prompt})
        # Retrieve relevant passages from the textbook.
        docs = vector_store.similarity_search(prompt, k=3)
        retrieved_text = "\n".join(doc.page_content for doc in docs)
        # Generate a response grounded in the retrieved context.
        completion = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": "Use the retrieved textbook information to answer the user's query."},
                {"role": "user", "content": f"User question: {prompt}\nRelevant textbook info: {retrieved_text}"},
            ],
        )
        response_text = completion.choices[0].message.content
        with st.chat_message("assistant"):
            st.markdown(response_text)
        st.session_state.messages.append({"role": "assistant", "content": response_text})
```
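To launch the assistant, save the script as `app.py` (after installing the packages listed in the setup comment) and run `streamlit run app.py`.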
|
NikolaSigmoid/AceMath-1.5B-Instruct-dolphin-r1-200
|
NikolaSigmoid
| 2025-03-05T20:16:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"base_model:nvidia/AceMath-1.5B-Instruct",
"base_model:quantized:nvidia/AceMath-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-03-05T20:15:50Z |
---
base_model: nvidia/AceMath-1.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** NikolaSigmoid
- **License:** apache-2.0
- **Finetuned from model :** nvidia/AceMath-1.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
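A minimal 4-bit loading sketch matching the bitsandbytes tags on this repo; it assumes a CUDA GPU with `bitsandbytes` installed, and the compute dtype is an assumption.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "NikolaSigmoid/AceMath-1.5B-Instruct-dolphin-r1-200"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed compute dtype
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
```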
|
jordanfan/modernBERT_depression
|
jordanfan
| 2025-03-05T20:15:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:jordanfan/modernBERT_suicide_base",
"base_model:finetune:jordanfan/modernBERT_suicide_base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-03-05T19:28:12Z |
---
library_name: transformers
license: apache-2.0
base_model: jordanfan/modernBERT_suicide_base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: modernBERT_depression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modernBERT_depression
This model is a fine-tuned version of [jordanfan/modernBERT_suicide_base](https://huggingface.co/jordanfan/modernBERT_suicide_base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7165
- Accuracy: 0.7896
- Precision: 0.7903
- Recall: 0.7896
- F1: 0.7894
## Model description
More information needed
## Intended uses & limitations
More information needed
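Until usage is documented, a minimal classification sketch with the transformers pipeline might look like the following; it assumes the checkpoint ships its own label mapping, since the class names are not documented here.
```python
from transformers import pipeline

clf = pipeline("text-classification", model="jordanfan/modernBERT_depression")
print(clf("I haven't been able to get out of bed for days."))
```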
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5289 | 1.0 | 969 | 0.4738 | 0.7903 | 0.7977 | 0.7903 | 0.7874 |
| 0.3411 | 2.0 | 1938 | 0.4775 | 0.7996 | 0.8023 | 0.7996 | 0.7995 |
| 0.1693 | 3.0 | 2907 | 0.7165 | 0.7896 | 0.7903 | 0.7896 | 0.7894 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
fats-fme/c83cc6c0-339e-48c2-b7d1-95c9c1272ff4
|
fats-fme
| 2025-03-05T20:14:56Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-03-05T20:02:36Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c83cc6c0-339e-48c2-b7d1-95c9c1272ff4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2b16fe95b587cb87_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2b16fe95b587cb87_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/c83cc6c0-339e-48c2-b7d1-95c9c1272ff4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 256
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 128
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 70GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2b16fe95b587cb87_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ae06967d-abc7-4ec2-a3c7-a7c4d81b67e8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ae06967d-abc7-4ec2-a3c7-a7c4d81b67e8
warmup_steps: 100
weight_decay: 0.05
xformers_attention: null
```
</details><br>
# c83cc6c0-339e-48c2-b7d1-95c9c1272ff4
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2530
## Model description
More information needed
## Intended uses & limitations
More information needed
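A minimal sketch for loading this LoRA adapter onto its base model; it assumes the adapter in this repo is directly compatible with the PEFT version listed under Framework versions.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B-Instruct", device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "fats-fme/c83cc6c0-339e-48c2-b7d1-95c9c1272ff4"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
```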
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.3251 |
| 0.3617 | 0.0223 | 100 | 0.2937 |
| 0.2482 | 0.0446 | 200 | 0.2530 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mradermacher/sft-apps-ds-7b-base-GGUF
|
mradermacher
| 2025-03-05T20:12:17Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ankner/sft-apps-ds-7b-base",
"base_model:quantized:ankner/sft-apps-ds-7b-base",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-05T18:42:20Z |
---
base_model: ankner/sft-apps-ds-7b-base
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ankner/sft-apps-ds-7b-base
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
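For a Python route, a minimal sketch with llama-cpp-python (an assumption on my part, not part of this repo's instructions) might be:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama.from_pretrained(
    repo_id="mradermacher/sft-apps-ds-7b-base-GGUF",
    filename="sft-apps-ds-7b-base.Q4_K_M.gguf",  # any quant from the table below
    n_ctx=4096,
)
out = llm("def fibonacci(n):", max_tokens=128)
print(out["choices"][0]["text"])
```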
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/sft-apps-ds-7b-base-GGUF/resolve/main/sft-apps-ds-7b-base.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/sft-apps-ds-7b-base-GGUF/resolve/main/sft-apps-ds-7b-base.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/sft-apps-ds-7b-base-GGUF/resolve/main/sft-apps-ds-7b-base.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/sft-apps-ds-7b-base-GGUF/resolve/main/sft-apps-ds-7b-base.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/sft-apps-ds-7b-base-GGUF/resolve/main/sft-apps-ds-7b-base.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/sft-apps-ds-7b-base-GGUF/resolve/main/sft-apps-ds-7b-base.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sft-apps-ds-7b-base-GGUF/resolve/main/sft-apps-ds-7b-base.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sft-apps-ds-7b-base-GGUF/resolve/main/sft-apps-ds-7b-base.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/sft-apps-ds-7b-base-GGUF/resolve/main/sft-apps-ds-7b-base.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/sft-apps-ds-7b-base-GGUF/resolve/main/sft-apps-ds-7b-base.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/sft-apps-ds-7b-base-GGUF/resolve/main/sft-apps-ds-7b-base.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/sft-apps-ds-7b-base-GGUF/resolve/main/sft-apps-ds-7b-base.f16.gguf) | f16 | 13.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
amuvarma/brian-luna-w_emotags-nowhisp
|
amuvarma
| 2025-03-05T20:07:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-05T19:29:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TFOCUS/king-v1_3
|
TFOCUS
| 2025-03-05T20:07:54Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-03-05T11:43:56Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
kylielee505/mycontrolnetlite
|
kylielee505
| 2025-03-05T20:05:52Z | 0 | 0 |
diffusers
|
[
"diffusers",
"onnx",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-03-05T20:04:36Z |
---
license: cc-by-nc-sa-4.0
library_name: diffusers
---
Thank you for supporting my work.
<a href="https://www.buymeacoffee.com/bdsqlsz"><img src="https://img.buymeacoffee.com/button-api/?text=Buy me a new graphics card&emoji=😋&slug=bdsqlsz&button_colour=40DCA5&font_colour=ffffff&font_family=Cookie&outline_colour=000000&coffee_colour=FFDD00" /></a>
https://www.buymeacoffee.com/bdsqlsz
The support list is shown on the main page.
# Support List
```
DiamondShark
Yashamon
t4ggno
Someone
kgmkm_mkgm
yacong
```
Pre-trained models and output samples of ControlNet-LLLite from bdsqlsz
# Inference with ComfyUI: https://github.com/kohya-ss/ControlNet-LLLite-ComfyUI (not the ControlNet nodes!)
For 1111's Web UI, [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet) extension supports ControlNet-LLLite.
Training: https://github.com/kohya-ss/sd-scripts/blob/sdxl/docs/train_lllite_README.md
The recommended preprocessing for the animeface model is [Anime-Face-Segmentation](https://github.com/siyeong0/Anime-Face-Segmentation)
# Models
## Trained on anime model
AnimeFaceSegment, Normal, T2i-Color/Shuffle, lineart_anime_denoise, recolor_luminance
Base model: [Kohaku-XL](https://civitai.com/models/136389?modelVersionId=150441)
MLSD
Base model: [ProtoVision XL - High Fidelity 3D](https://civitai.com/models/125703?modelVersionId=144229)
# Japanese Introduction
https://note.com/kagami_kami/n/nf71099b6abe3
Thanks to kgmkm_mkgm for introducing and testing these ControlNet-LLLite models.
# Samples
## AnimeFaceSegmentV2




## DepthV2_(Marigold)




## MLSDV2






## Normal_Dsine




## T2i-Color/Shuffle






## Lineart_Anime_Denoise






## Recolor_Luminance






## Canny






## DW_OpenPose




## Tile_Anime




Unlike other models, I need to briefly explain how to use the tile model.
In general, the tile model has three uses:
1. Without entering any prompt, it can directly restore the approximate look of the reference image and then slightly rework local details; this can be used for V2V (Figure 2).
2. With a weight of 0.55–0.75, it can keep the original composition and pose while accepting modifications from prompts and LoRAs (Figure 3).
3. Combined with upscaling, it adds detail to each tile while maintaining consistency (Figure 4).
Since the dataset used during training consists of anime 2D/2.5D images, the repainting effect on realistic photographic styles is currently not good; this will have to wait for the final version.

Two versions, α and β, have been released, corresponding to usages 1+2 and 1+3 respectively.
The α version is for pose and composition transfer; it generalizes well and can be combined with other LoRAs.
The β version is for maintaining consistency and high-resolution upscaling; it is more sensitive to the conditioning image.
In short, α is the version where the prompt matters more, and β is the version where the ControlNet matters more.
## Tile_Realistic
Thank for all my supporter.
```
DiamondShark
Yashamon
t4ggno
Someone
kgmkm_mkgm
```
Even though I broke my foot last week, I still went ahead and trained the realistic version.


You can compare it with the SD1.5 tile model below↓

The base model is from the JuggernautXL series, so I recommend using their models or merging with them.
Here is a comparison with other SDXL models.

|
TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B__gr-r128-a128-epoch2-Merged
|
TheBlueObserver
| 2025-03-05T20:04:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-05T20:01:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
damonperpetuo/bot
|
damonperpetuo
| 2025-03-05T20:04:50Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-03-05T20:04:49Z |
---
license: other
license_name: teste
license_link: LICENSE
---
|
htdung167/qwen2-2b-instruct-trl-sft_7
|
htdung167
| 2025-03-05T20:04:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-03-05T08:58:56Z |
---
base_model: Qwen/Qwen2-VL-2B-Instruct
library_name: transformers
model_name: qwen2-2b-instruct-trl-sft_7
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-2b-instruct-trl-sft_7
This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="htdung167/qwen2-2b-instruct-trl-sft_7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/htdung167/qwen2-7b-instruct-trl-sft/runs/whq2vh87)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Joooorrit/023
|
Joooorrit
| 2025-03-05T20:04:05Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-05T20:04:05Z |
---
license: apache-2.0
---
|
TFOCUS/king-v1_1
|
TFOCUS
| 2025-03-05T20:04:02Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-03-05T11:43:47Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Tarek07/Dungeonmaster-Expanded-R1-LLaMa-70B
|
Tarek07
| 2025-03-05T20:03:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4",
"base_model:merge:ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4",
"base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:merge:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TareksLab/Genesis-R1-L3.3-70B",
"base_model:merge:TareksLab/Genesis-R1-L3.3-70B",
"base_model:TheDrummer/Anubis-70B-v1",
"base_model:merge:TheDrummer/Anubis-70B-v1",
"base_model:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"base_model:merge:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"license:llama3.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-05T06:21:34Z |
---
base_model:
- ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
- SicariusSicariiStuff/Negative_LLAMA_70B
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- TheDrummer/Anubis-70B-v1
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- TareksLab/Genesis-R1-L3.3-70B
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
library_name: transformers
tags:
- mergekit
- merge
license: llama3.3
---

Dungeonmaster is designed specifically for creative roleplays with stakes and consequences, using the following curated models:
Dungeonmaster Expanded features 2 extra models, bringing the total up to 7! Admittedly, I was concerned about having that many models in a single merge. But you never know, so I decided to try both and see...
# NB: I think the reasoning got too diluted; it works well as a normal model, but 'thinking' doesn't seem to work.
My ideal vision for Dungeonmaster was these 7 models:
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3 - A fine-tuned model specifically designed for this very application.
- ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3 - Another fine-tune trained on RP datasets.
- Sao10K/70B-L3.3-mhnnn-x1 - For some extra creativity
- TheDrummer/Anubis-70B-v1 - Another excellent RP fine-tune.
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1 - For its strong descriptive writing.
- SicariusSicariiStuff/Negative_LLAMA_70B - To assist with the darker undertones.
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1 - The secret sauce, a completely unhinged thinking model that turns things up to 11.
# Mergekit
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method using TareksLab/Genesis-R1-L3.3-70B as a base.
### Models Merged
The following models were included in the merge:
* ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
* SicariusSicariiStuff/Negative_LLAMA_70B
* LatitudeGames/Wayfarer-Large-70B-Llama-3.3
* TheDrummer/Anubis-70B-v1
* TheDrummer/Fallen-Llama-3.3-R1-70B-v1
* TareksLab/Genesis-R1-L3.3-70B
* EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- model: ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
- model: Sao10K/70B-L3.3-mhnnn-x1
- model: TheDrummer/Anubis-70B-v1
- model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- model: SicariusSicariiStuff/Negative_LLAMA_70B
- model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
merge_method: della_linear
chat_template: llama3
base_model: TareksLab/Genesis-R1-L3.3-70B
parameters:
weight: 0.14
density: 0.7
epsilon: 0.2
lambda: 1.1
normalize: true
dtype: bfloat16
tokenizer:
source: TareksLab/Genesis-R1-L3.3-70B
```
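For reference, a merge like this is usually reproduced with mergekit's `mergekit-yaml` CLI. A minimal sketch, assuming a local mergekit install (the config filename is illustrative):
```bash
# Save the YAML above as dungeonmaster.yaml, then run the merge;
# --cuda enables GPU acceleration if available.
pip install mergekit
mergekit-yaml dungeonmaster.yaml ./Dungeonmaster-Expanded-R1-LLaMa-70B --cuda
```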
|
Jonjew/AliciaWitt
|
Jonjew
| 2025-03-05T20:02:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-05T20:02:12Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/fluxcustomcelebrityalicia-witt.safetensors_250114173334_00001_MSI_Image_01.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: unknown
---
# Alicia Witt (Flux) - Televison and Movie Actress and Musician
<Gallery />
## Model description
FROM https://civitai.com/models/1144247/alicia-witt-flux-televison-and-movie-actress-and-musician?modelVersionId=1286906
If you like this LoRA and generate some images, please share them here. It helps me learn what works and what does not!!!
There is no trigger word needed(all the samples were done without one). You can use 'alicia-witt' if you want.
Alicia Witt is an American actress, singer-songwriter, and pianist known for her diverse career in film, television, and music. She has been recognized for her talent both as a performer and musician.
I create these LoRAs for less popular people I do not see represented by other creators.
Likes, shares, and buzz are always appreciated, as they help me decide whether to create similar ones or switch to other niche genres.
Gifting me buzz is great, but training is 99% done locally, so others could use it more.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/AliciaWitt/tree/main) them in the Files & versions tab.
|
Krazeder/ppo-Pyramids-Training
|
Krazeder
| 2025-03-05T20:02:15Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2025-03-05T20:02:05Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Krazeder/ppo-Pyramids-Training
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
mradermacher/Citrus1.0-Qwen-72B-GGUF
|
mradermacher
| 2025-03-05T20:01:59Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:jdh-algo/Citrus1.0-Qwen-72B",
"base_model:quantized:jdh-algo/Citrus1.0-Qwen-72B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-05T13:31:00Z |
---
base_model: jdh-algo/Citrus1.0-Qwen-72B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jdh-algo/Citrus1.0-Qwen-72B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Citrus1.0-Qwen-72B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
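For a quick start, a single-file quant from the table below can be pulled and run directly with a recent llama.cpp build (a sketch; the prompt is illustrative):
```bash
# Download the Q4_K_S quant from the Hub on first use and run one prompt.
llama-cli --hf-repo mradermacher/Citrus1.0-Qwen-72B-GGUF \
  --hf-file Citrus1.0-Qwen-72B.Q4_K_S.gguf \
  -p "Hello"
```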
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Citrus1.0-Qwen-72B-GGUF/resolve/main/Citrus1.0-Qwen-72B.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/Citrus1.0-Qwen-72B-GGUF/resolve/main/Citrus1.0-Qwen-72B.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/Citrus1.0-Qwen-72B-GGUF/resolve/main/Citrus1.0-Qwen-72B.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Citrus1.0-Qwen-72B-GGUF/resolve/main/Citrus1.0-Qwen-72B.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/Citrus1.0-Qwen-72B-GGUF/resolve/main/Citrus1.0-Qwen-72B.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/Citrus1.0-Qwen-72B-GGUF/resolve/main/Citrus1.0-Qwen-72B.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Citrus1.0-Qwen-72B-GGUF/resolve/main/Citrus1.0-Qwen-72B.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Citrus1.0-Qwen-72B-GGUF/resolve/main/Citrus1.0-Qwen-72B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Citrus1.0-Qwen-72B-GGUF/resolve/main/Citrus1.0-Qwen-72B.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/Citrus1.0-Qwen-72B-GGUF/resolve/main/Citrus1.0-Qwen-72B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Citrus1.0-Qwen-72B-GGUF/resolve/main/Citrus1.0-Qwen-72B.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/Citrus1.0-Qwen-72B-GGUF/resolve/main/Citrus1.0-Qwen-72B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Citrus1.0-Qwen-72B-GGUF/resolve/main/Citrus1.0-Qwen-72B.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Citrus1.0-Qwen-72B-GGUF/resolve/main/Citrus1.0-Qwen-72B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Citrus1.0-Qwen-72B-GGUF/resolve/main/Citrus1.0-Qwen-72B.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
fats-fme/6dedefe1-c1dc-411d-869b-76d9a102d085
|
fats-fme
| 2025-03-05T20:01:34Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer",
"base_model:adapter:NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer",
"license:other",
"region:us"
] | null | 2025-03-05T19:05:41Z |
---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6dedefe1-c1dc-411d-869b-76d9a102d085
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 88b63c54aa23ac0e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/88b63c54aa23ac0e_train_data.json
type:
field_instruction: startphrase
field_output: gold-ending
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/6dedefe1-c1dc-411d-869b-76d9a102d085
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 256
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 128
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 70GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/88b63c54aa23ac0e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4fafcb3f-91d6-4849-bdfb-2b29ec81f6d8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4fafcb3f-91d6-4849-bdfb-2b29ec81f6d8
warmup_steps: 100
weight_decay: 0.05
xformers_attention: null
```
</details><br>
# 6dedefe1-c1dc-411d-869b-76d9a102d085
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 5.9167 |
| 3.1339 | 0.0091 | 100 | 3.0379 |
| 2.9525 | 0.0181 | 200 | 2.9537 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Shero448/hinata-ilu
|
Shero448
| 2025-03-05T19:59:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:John6666/prefect-illustrious-xl-v10-sdxl",
"base_model:adapter:John6666/prefect-illustrious-xl-v10-sdxl",
"region:us"
] |
text-to-image
| 2025-03-05T19:59:19Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/1.png
base_model: John6666/prefect-illustrious-xl-v10-sdxl
instance_prompt: >-
hyuuga hinata, konohagakure symbol, long hair, blunt bangs, byakugan, white
eyes, no pupils
---
# hinata-ilu
<Gallery />
## Trigger words
You should use `hyuuga hinata` to trigger the image generation.
You should use `konohagakure symbol` to trigger the image generation.
You should use `long hair` to trigger the image generation.
You should use `blunt bangs` to trigger the image generation.
You should use `byakugan` to trigger the image generation.
You should use `white eyes` to trigger the image generation.
You should use `no pupils` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Shero448/hinata-ilu/tree/main) them in the Files & versions tab.
|
Jonjew/MarkiePost
|
Jonjew
| 2025-03-05T19:59:40Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-05T19:59:34Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/fluxcustomcelebritymarkie-post.safetensors_250105171608_00001_MSI_Image_04.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: unknown
---
# Markie Post (Flux) - Television Actress
<Gallery />
## Model description
FROM https://civitai.com/models/1112393/markie-post-flux-television-actress
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/MarkiePost/tree/main) them in the Files & versions tab.
|
texanrangee/176823a6-90b1-45b4-97b7-dae585efea62
|
texanrangee
| 2025-03-05T19:58:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-05T18:17:14Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
h9art/Qwen2.5-Coder-3B-Instruct-100kSQL_finetuned
|
h9art
| 2025-03-05T19:58:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-Coder-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-05T19:57:53Z |
---
base_model: unsloth/Qwen2.5-Coder-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** h9art
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-Coder-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Fantale/VIBEZ
|
Fantale
| 2025-03-05T19:55:51Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-05T19:37:26Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: VIBEZREMALGLASS
---
# Vibez
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `VIBEZREMALGLASS` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Fantale/VIBEZ', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Jonjew/TeaLeoni
|
Jonjew
| 2025-03-05T19:55:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-05T19:55:37Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/fluxcustomcelebritytea-leoni.safetensors_20250111232853_00002_MSI_Image_01.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: unknown
---
# Tea Leoni (Flux) - Television and Film Actress
<Gallery />
## Model description
FROM https://civitai.com/models/1133472/tea-leoni-flux-television-and-film-actress?modelVersionId=1274303
If you like this LoRA and generate some images, please share them here. It helps me learn what works and what does not!!!
There is no trigger word needed (all the samples were done without one). You can use 'tea-leoni' if you want.
Téa Leoni is an American actress and producer. Known for her versatility and charm, she has starred in numerous television shows and films, ranging from comedies to dramas. Her career spans decades, and she remains a beloved figure in Hollywood.
I create these LoRAs for less popular people I do not see represented by other creators.
Likes, shares, and buzz are always appreciated, as they help me decide whether to create similar ones or switch to other niche genres.
Gifting me buzz is great, but training is 99% done locally, so others could use it more.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/TeaLeoni/tree/main) them in the Files & versions tab.
|
TheBlueObserver/Llama-3.2-1B-Instruct__gr-r128-a128-epoch1
|
TheBlueObserver
| 2025-03-05T19:54:29Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-03-05T19:54:16Z |
# TheBlueObserver/Llama-3.2-1B-Instruct__gr-r128-a128-epoch1 Model Card
## LoRA Details
- **Rank**: 128
- **Alpha**: 128
## Training Details
- **Datasets**: gr_medical
- **Limit**: -1
- **Max Steps**: default
- **Epochs**: 1
|
streaming-tv/DIRECT-Paris-SG-Liverpool-En-Direct-Streaming-Gratuit-tv
|
streaming-tv
| 2025-03-05T19:54:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-03-05T19:45:42Z |
Paris Saint-Germain vs Liverpool kicks off on 5 March 2025 at 20:00 UTC at the Parc des Princes stadium, Paris, France. It is a UEFA Champions League match, Knockout Phase.
On Sofascore's live coverage you will find the head-to-head record between Paris Saint-Germain and Liverpool. Sofascore is the best way to follow this match, with plenty of features. For example, you can:
See who scored in the match live
Find out which team is dominating the match using Attack Momentum
Follow detailed statistics such as possession, shots, corners, big chances, cards, key passes, duels and more
Follow all home and away matches in the UEFA Champions League Knockout Phase
See the favorite according to the Sofascore community.
All these features can help you make your prediction for Paris Saint-Germain vs Liverpool. Although Sofascore does not let you place bets directly, you will find the best odds and sports betting sites there. Live U-TV odds are available in the Football live section.
Where to watch Paris Saint-Germain vs Liverpool? In the TV section you will find the list of channels broadcasting Paris Saint-Germain – Liverpool live. You can also watch the match via our sports betting partners or via the legal links on Sofascore.
Event details:
NAME: Paris Saint-Germain - Liverpool
DATE: 5 March 2025
TIME: 20:00 UTC
VENUE: Parc des Princes, Paris, France
More information:
Paris Saint-Germain live scores, schedule and results
Liverpool live scores, schedule and results
Sofascore live scores is available for iPhone, iPad, Android (on the Google Play Store) and for Windows phone. You can find us in different languages on these platforms under the name "Sofascore". Install the Sofascore app and follow Paris Saint-Germain vs Liverpool live on your mobile!
|
TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B__gr-r32-a32-epoch1-Merged
|
TheBlueObserver
| 2025-03-05T19:53:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-05T19:49:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gustavobaby/rrupyh
|
Gustavobaby
| 2025-03-05T19:52:19Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-03-05T19:52:16Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/GegYBhcW4AAwKhB.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# rrupyh
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Gustavobaby/rrupyh/tree/main) them in the Files & versions tab.
|
Bu-Guru-Salsa-Virals/Full.Video.Bu.Guru.Salsa.instagram.viral.video.Link.Original
|
Bu-Guru-Salsa-Virals
| 2025-03-05T19:51:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-03-05T19:49:50Z |
|
Jonjew/ElizabethMontgomery
|
Jonjew
| 2025-03-05T19:51:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-05T19:51:04Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/fluxcustomcelebrityelizabeth-montgomery.safetensors_20250107191341_00002_MSI_Image_03.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: unknown
---
# Elizabeth Montgomery (Flux) - Television and Movie Actress
<Gallery />
## Model description
FROM https://civitai.com/models/1119385/elizabeth-montgomery-flux-television-and-movie-actress
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/ElizabethMontgomery/tree/main) them in the Files & versions tab.
|
SpongeEngine/Amelia-SCE-12B-i1-GGUF
|
SpongeEngine
| 2025-03-05T19:51:03Z | 0 | 0 | null |
[
"gguf",
"SpongeQuant",
"i1-GGUF",
"en",
"base_model:yamatazen/Amelia-SCE-12B",
"base_model:quantized:yamatazen/Amelia-SCE-12B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-05T15:33:02Z |
---
base_model: yamatazen/Amelia-SCE-12B
language:
- en
license: mit
quantized_by: SpongeQuant
tags:
- SpongeQuant
- i1-GGUF
---
Quantized to `i1-GGUF` using [SpongeQuant](https://github.com/SpongeEngine/SpongeQuant), the Oobabooga of LLM quantization.
<div style="display: flex; gap: 20px; align-items: center; margin-top:0;">
<a href="https://github.com/SpongeEngine/SpongeQuant">
<img src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/github-button.png" width="173">
</a>
<a href="https://discord.gg/azNmr2Gdgy">
<img src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/discord-button.png" width="173">
</a>
</div>
***
<figure>
<img src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/093.png" alt="UN Building Day">
<figcaption>UN Building Day</figcaption>
</figure>
<figure>
<audio controls>
<source src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/012.mp3" type="audio/mp3">
Your browser does not support the audio element.
</audio>
<figcaption>El Cascabel – Antonio Maciel and Los Aguilillas with Mariachi México de Pepe Villa / Rafael Carrión (Mexico, Unknown)</figcaption>
</figure>
***
### What is a GGUF?
GGUF is a file format used for running large language models (LLMs) on different types of computers. It supports both regular processors (CPUs) and graphics cards (GPUs), making it easier to run models across a wide range of hardware. Many LLMs require powerful and expensive GPUs, but GGUF improves compatibility and efficiency by optimizing how models are loaded and executed. If a GPU doesn't have enough memory, GGUF can offload parts of the model to the CPU, allowing it to run even when GPU resources are limited. GGUF is designed to work well with quantized models, which use less memory and run faster, making them ideal for lower-end hardware. However, it can also store full-precision models when needed. Thanks to these optimizations, GGUF allows LLMs to run efficiently on everything from high-end GPUs to laptops and even CPU-only systems.
### What is an i1-GGUF?
i1-GGUF is an enhanced type of GGUF model that uses imatrix quantization—a smarter way of reducing model size while preserving key details. Instead of shrinking everything equally, it analyzes the importance of different model components and keeps the most crucial parts more accurate. Like standard GGUF, i1-GGUF allows LLMs to run on various hardware, including CPUs and lower-end GPUs. However, because it prioritizes important weights, i1-GGUF models deliver better responses than traditional GGUF models while maintaining efficiency.
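As a minimal sketch with a recent llama.cpp build (the exact quant filename below is an assumption; check the Files tab of this repo for the real one):
```bash
# Download an i1 quant from the Hub on first use and start an interactive chat.
llama-cli --hf-repo SpongeEngine/Amelia-SCE-12B-i1-GGUF \
  --hf-file Amelia-SCE-12B.i1-Q4_K_M.gguf -cnv
```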
|
Joooorrit/002
|
Joooorrit
| 2025-03-05T19:46:16Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-05T19:46:16Z |
---
license: apache-2.0
---
|
hushhushhurr/Janiii
|
hushhushhurr
| 2025-03-05T19:42:59Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-05T19:42:59Z |
---
license: apache-2.0
---
|
KushGupster/QwQ-32B-Q4_K_M-GGUF
|
KushGupster
| 2025-03-05T19:42:35Z | 0 | 0 | null |
[
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/QwQ-32B",
"base_model:quantized:Qwen/QwQ-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-03-05T19:41:06Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/QwQ-32B
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# KushGupster/QwQ-32B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/QwQ-32B`](https://huggingface.co/Qwen/QwQ-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/QwQ-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo KushGupster/QwQ-32B-Q4_K_M-GGUF --hf-file qwq-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo KushGupster/QwQ-32B-Q4_K_M-GGUF --hf-file qwq-32b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo KushGupster/QwQ-32B-Q4_K_M-GGUF --hf-file qwq-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo KushGupster/QwQ-32B-Q4_K_M-GGUF --hf-file qwq-32b-q4_k_m.gguf -c 2048
```
|
nsugianto/detr-resnet50_finetuned_tower_towerv1wholeObjArea_lr1e-05_decay0.0001_ep250_bs8
|
nsugianto
| 2025-03-05T19:41:48Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"detr",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"region:us"
] | null | 2025-03-05T14:19:21Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet50_finetuned_tower_towerv1wholeObjArea_lr1e-05_decay0.0001_ep250_bs8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet50_finetuned_tower_towerv1wholeObjArea_lr1e-05_decay0.0001_ep250_bs8
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 3.3.2
- Tokenizers 0.19.1
|
texanrangee/3990b381-0ae4-40f7-9ee8-0dddf07245cd
|
texanrangee
| 2025-03-05T19:41:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-05T18:44:38Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jonjew/VanessaWilliams
|
Jonjew
| 2025-03-05T19:39:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-05T19:39:50Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: vanessawilliams
output:
url: images/1215-vanessawilliams-Fluxflux1-dev-fp8-579183929.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: vanessawilliams
license: unknown
---
# Vanessa Williams
<Gallery />
## Model description
FROM https://civitai.com/models/1291074/vanessa-williams?modelVersionId=1456932
Trigger vanessawilliams
This LoRA was created with FluxGym using the default options at rank 4.
## Trigger words
You should use `vanessawilliams` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/VanessaWilliams/tree/main) them in the Files & versions tab.
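As a hedged usage sketch (not from the original card), the LoRA can presumably be loaded with diffusers on top of the FLUX.1-dev base; `load_lora_weights` should pick up the repo's safetensors file:
```python
# a minimal sketch, assuming the standard diffusers LoRA flow for FLUX.1-dev
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("Jonjew/VanessaWilliams")
image = pipeline("vanessawilliams, portrait photo").images[0]  # trigger word from the card
image.save("vanessawilliams.png")
```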
|
Sapna-Shah-viral-Video/Full.Video.sapna.shah.instagram.viral.video.Link.Original
|
Sapna-Shah-viral-Video
| 2025-03-05T19:39:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-03-05T19:34:55Z |
|
sudhanshu-soft/myllama3_dpo_vllm_4
|
sudhanshu-soft
| 2025-03-05T19:37:57Z | 53 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-12-16T12:53:49Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- dpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sudhanshu-soft
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
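As a hedged loading sketch (not part of the original card), the checkpoint should load like any other transformers causal LM; the 4-bit bnb weights require `bitsandbytes` to be installed:
```python
# a minimal sketch, assuming a standard transformers + bitsandbytes setup
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sudhanshu-soft/myllama3_dpo_vllm_4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```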
|
caglarmert/vit-base-patch16-224-in21k-finetuned-lora-food101
|
caglarmert
| 2025-03-05T19:37:00Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-01-27T14:57:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bhavya777/dpo-sft-model
|
bhavya777
| 2025-03-05T19:34:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-05T19:33:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Krazeder/ppo-SnowballTarget
|
Krazeder
| 2025-03-05T19:34:35Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-03-05T19:34:28Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to help you train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Krazeder/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Shero448/barghest-ilu
|
Shero448
| 2025-03-05T19:30:19Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:John6666/prefect-illustrious-xl-v10-sdxl",
"base_model:adapter:John6666/prefect-illustrious-xl-v10-sdxl",
"region:us"
] |
text-to-image
| 2025-03-05T19:29:58Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
masterpiece, best quality, very aesthetic, highres, absurdres,1girl,
<lora:realistic filter [IL]:1>, realistic,
<lora:StS-Illustrious-Detail-Slider-v1.0_1027785:1>,
<lora:barghest-illust:1>, fatebarghest, breasts, 1girl, blonde hair, fairy
knight gawain \(fate\), long hair, green eyes, horns, bangs, cleavage,
looking at viewer, white blouse, solo, heterochromia, smile, large breasts,
muscular female, pencil skirt, office, standing,
parameters:
negative_prompt: >-
worst quality, low quality,source_furry, source_pony, source_cartoon, 3d,
blurry, character_name, circle_name, commissioner_name, company_name,
completion_time, copyright_name, dated, group_name, logo, content_rating,
twitter_username, signature, character_signature, song_name, watermark,
web_address, weapon_name, (censored, text_background, text),
output:
url: images/00955.png
base_model: John6666/prefect-illustrious-xl-v10-sdxl
instance_prompt: fatebarghest, 1girl, fairy knight gawain \(fate\)
---
# barghest-ilu
<Gallery />
## Trigger words
You should use `fatebarghest` to trigger the image generation.
You should use `1girl` to trigger the image generation.
You should use `fairy knight gawain \(fate\)` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Shero448/barghest-ilu/tree/main) them in the Files & versions tab.
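A hedged loading sketch (not from the original card), following the standard diffusers LoRA flow on the SDXL-family base named above:
```python
# a minimal sketch, assuming the standard diffusers LoRA flow for an SDXL-family base
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "John6666/prefect-illustrious-xl-v10-sdxl", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Shero448/barghest-ilu")
prompt = r"fatebarghest, 1girl, fairy knight gawain \(fate\), masterpiece, best quality"
image = pipe(prompt).images[0]  # trigger words from the card
image.save("barghest.png")
```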
|
jorgefg03/xlm-roberta-base-500-bioautex
|
jorgefg03
| 2025-03-05T19:29:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-03-05T19:28:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
robiulawaldev/f272b32c-9172-4235-9127-a216621ff50a
|
robiulawaldev
| 2025-03-05T19:29:42Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"base_model:adapter:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"region:us"
] | null | 2025-03-05T19:29:24Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
model-index:
- name: robiulawaldev/f272b32c-9172-4235-9127-a216621ff50a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robiulawaldev/f272b32c-9172-4235-9127-a216621ff50a
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5153
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Jonjew/MauraTierney
|
Jonjew
| 2025-03-05T19:29:40Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-05T19:29:33Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "<lora:Maura_Tierney_Ca2001:1> woman, smiling A Beautiful Princess, Smiling, Extremely Long Wavy Hair, Diamond Tiara, Silk Glowing High-Neck Gown, Thin Waist, High Heels, Light Particles Seem To Float All Around Their, Golden Hour, God Rays, Sunshine, Professional Photography, Magical Particles Are Floating In The Air, Bokeh, 80mm Lens, F 1/8, Depth Of Field.., Glow Effects, God Rays, Smoke Effects, Hand Drawn, 3d Octane Render, Cinema 4d, Blender, Dark, Atmospheric, Ultra Detailed, Sharp Focus, Big Depth Of Field, Masterpiece, Concept Art, Trending On Artstation, CG Unity, Trending On CGSociety, Dramatic, Professional Photo, 4k Wallpaper, Hyper Realistic, Vivid Colors, Extremely Detailed, 8k Wallpaper, Intricate, High Detail, Dramatic Lighting, High Contrast, Shadows, Highlights, Golden Hour, Backlighting, Sunbeams, God Rays <Lora:zz_s_Fluxartis:0.5> A Highly Detailed Cinematic Photography <Lora:zz_s_Stylish_Lighting:0.5>, Looking Directly At The Viewer, Centered, Body Perpendicular to Viewer, Looking Directly At The Camera, Making Eye Contact, Looking Straight Ahead, <lora:zz_s_Chest_Size_Slider:-2.5>"
output:
url: images/Maura_Tierney_Ca2001_0005.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: woman
license: unknown
---
# Maura Tierney (Ca 2001)
<Gallery />
## Model description
FROM https://civitai.com/models/1302388/maura-tierney-ca-2001?modelVersionId=1470069
Trigger woman
Strength 1
Maura Tierney is an award-winning American actress known for her versatile roles in both television and film. Born on February 3, 1965, in Boston, Massachusetts, she gained widespread recognition for her role as Lisa Miller on the sitcom "NewsRadio" (1995–1999) and as Dr. Abby Lockhart on the medical drama "ER" (1999–2009). Her performance on "ER" earned her an Emmy Award nomination.
Tierney has also appeared in numerous films, including "Primal Fear" (1996), "Liar Liar" (1997), "Primary Colors" (1998), "Forces of Nature" (1999), "Insomnia" (2002), "Baby Mama" (2008), "Beautiful Boy" (2018), and "The Report" (2019). She continues to captivate audiences with her performances and remains a prominent figure in the entertainment industry
## Trigger words
You should use `woman` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/MauraTierney/tree/main) them in the Files & versions tab.
|
lucasjca/Fine-Tunning-tiny-v1.0
|
lucasjca
| 2025-03-05T19:29:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"pt",
"dataset:lorem-ipsum/dolor-sit-amet",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-03-05T19:29:03Z |
---
library_name: transformers
language:
- pt
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- lorem-ipsum/dolor-sit-amet
model-index:
- name: Whisper Tiny - Fala-Teste Revisado
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny - Fala-Teste Revisado
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Treinamento teste com dados revisados (test training with revised data) dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
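As a tentative usage sketch (assumptions: the standard transformers ASR pipeline; `sample_pt.wav` is a placeholder audio file):
```python
# a minimal sketch, assuming the standard transformers ASR pipeline
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="lucasjca/Fine-Tunning-tiny-v1.0",
)
result = asr("sample_pt.wav", generate_kwargs={"language": "portuguese"})
print(result["text"])
```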
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 250
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
mradermacher/Magnum-v1-72b-Qwen2.5-GGUF
|
mradermacher
| 2025-03-05T19:27:45Z | 120 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:gghfez/Magnum-v1-72b-Qwen2.5",
"base_model:quantized:gghfez/Magnum-v1-72b-Qwen2.5",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-04T05:04:09Z |
---
base_model: gghfez/Magnum-v1-72b-Qwen2.5
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/gghfez/Magnum-v1-72b-Qwen2.5
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Magnum-v1-72b-Qwen2.5-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
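For the split quants below, concatenation simply means joining the parts byte-for-byte, in order, into one `.gguf` file. A minimal Python sketch, using the Q5_K_S filenames from the table as an example:
```python
# a minimal sketch: join split GGUF parts back into a single file, in order
import shutil

parts = [
    "Magnum-v1-72b-Qwen2.5.Q5_K_S.gguf.part1of2",
    "Magnum-v1-72b-Qwen2.5.Q5_K_S.gguf.part2of2",
]
with open("Magnum-v1-72b-Qwen2.5.Q5_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```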
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Magnum-v1-72b-Qwen2.5-GGUF/resolve/main/Magnum-v1-72b-Qwen2.5.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum-v1-72b-Qwen2.5-GGUF/resolve/main/Magnum-v1-72b-Qwen2.5.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum-v1-72b-Qwen2.5-GGUF/resolve/main/Magnum-v1-72b-Qwen2.5.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Magnum-v1-72b-Qwen2.5-GGUF/resolve/main/Magnum-v1-72b-Qwen2.5.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum-v1-72b-Qwen2.5-GGUF/resolve/main/Magnum-v1-72b-Qwen2.5.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum-v1-72b-Qwen2.5-GGUF/resolve/main/Magnum-v1-72b-Qwen2.5.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Magnum-v1-72b-Qwen2.5-GGUF/resolve/main/Magnum-v1-72b-Qwen2.5.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Magnum-v1-72b-Qwen2.5-GGUF/resolve/main/Magnum-v1-72b-Qwen2.5.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Magnum-v1-72b-Qwen2.5-GGUF/resolve/main/Magnum-v1-72b-Qwen2.5.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/Magnum-v1-72b-Qwen2.5-GGUF/resolve/main/Magnum-v1-72b-Qwen2.5.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Magnum-v1-72b-Qwen2.5-GGUF/resolve/main/Magnum-v1-72b-Qwen2.5.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/Magnum-v1-72b-Qwen2.5-GGUF/resolve/main/Magnum-v1-72b-Qwen2.5.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Magnum-v1-72b-Qwen2.5-GGUF/resolve/main/Magnum-v1-72b-Qwen2.5.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Magnum-v1-72b-Qwen2.5-GGUF/resolve/main/Magnum-v1-72b-Qwen2.5.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Magnum-v1-72b-Qwen2.5-GGUF/resolve/main/Magnum-v1-72b-Qwen2.5.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
peterkeating/pete-face-lora
|
peterkeating
| 2025-03-05T19:26:31Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-05T17:47:03Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: PEK
---
# Pete Face Lora
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `PEK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('peterkeating/pete-face-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Jonjew/DonnaDixon
|
Jonjew
| 2025-03-05T19:23:38Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-05T19:23:31Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: donna-dixon
output:
url: >-
images/fluxcustomcelebritydonna-dixon.safetensors_20250205171136_00002_donna_dixon_Image_01.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: donna-dixon
license: unknown
---
# Donna Dixon (Flux) - Actress
<Gallery />
## Model description
FROM https://civitai.com/models/1223865/donna-dixon-flux-actress?modelVersionId=1378920
Trigger donna-dixon
Strength 1
If you like this LoRA and generate some images, please share them here. It helps me learn what works and what does not!!!
There is no trigger word needed (all the samples were done without one). You can use 'donna-dixon' if you want.
Donna Dixon (born July 20, 1957) is an American actress and former beauty queen best known for her roles in 1980s comedy films and television. She gained recognition both for her acting career and for her marriage to comedian and actor Dan Aykroyd.
I create these LoRAs for less popular people I do not see represented by other creators.
Likes, shares, and buzz are always appreciated, as they help me decide whether to create similar ones or switch to other niche genres.
Gifting me buzz is great, but training is 99% done locally, so others could use it more.
## Trigger words
You should use `donna-dixon` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/DonnaDixon/tree/main) them in the Files & versions tab.
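A hedged loading sketch, mirroring the other FLUX LoRA cards in this dump (the prompt is illustrative; the card says a trigger word is optional):
```python
# a minimal sketch, assuming the standard diffusers LoRA flow for FLUX.1-dev
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Jonjew/DonnaDixon")  # full LoRA strength by default, matching the card's "Strength 1"
image = pipe("donna-dixon, 1980s portrait photo").images[0]
image.save("donna_dixon.png")
```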
|
xgmab123/tp3b
|
xgmab123
| 2025-03-05T19:20:55Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-05T19:17:15Z |
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xgmab123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ClarenceDan/edbf47f4-b2e3-4ded-ae50-fa8922aec6f6
|
ClarenceDan
| 2025-03-05T19:20:51Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer",
"base_model:adapter:NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer",
"license:other",
"region:us"
] | null | 2025-03-05T18:41:20Z |
---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: edbf47f4-b2e3-4ded-ae50-fa8922aec6f6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 88b63c54aa23ac0e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/88b63c54aa23ac0e_train_data.json
type:
field_instruction: startphrase
field_output: gold-ending
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/edbf47f4-b2e3-4ded-ae50-fa8922aec6f6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/88b63c54aa23ac0e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4fafcb3f-91d6-4849-bdfb-2b29ec81f6d8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4fafcb3f-91d6-4849-bdfb-2b29ec81f6d8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# edbf47f4-b2e3-4ded-ae50-fa8922aec6f6
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.2745 | 0.0001 | 1 | 5.9168 |
| 5.2835 | 0.0003 | 3 | 5.9108 |
| 6.566 | 0.0005 | 6 | 5.7893 |
| 4.7511 | 0.0008 | 9 | 5.2067 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
phucfelix/FB-DLAI-Instruct-tune-v3
|
phucfelix
| 2025-03-05T19:20:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-03-05T19:17:45Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
texanrangee/6bdb4f38-928c-4657-833f-0da18e157450
|
texanrangee
| 2025-03-05T19:17:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-05T18:39:50Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xgmab123/tp3b-q4
|
xgmab123
| 2025-03-05T19:17:07Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-05T19:16:29Z |
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xgmab123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
michaelosei/Metaevaluation
|
michaelosei
| 2025-03-05T19:16:30Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-03-05T19:16:30Z |
---
license: bigscience-bloom-rail-1.0
---
|
wwydmanski/specter2_pubmed-v0.7
|
wwydmanski
| 2025-03-05T19:13:09Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:57566",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:allenai/specter2_base",
"base_model:finetune:allenai/specter2_base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-03-05T10:12:19Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:57566
- loss:MultipleNegativesRankingLoss
base_model: allenai/specter2_base
widget:
- source_sentence: Cannabis evolution
sentences:
- 'The cannabis conundrum. '
- 'Dawn and decline of the holy smoke. '
- '[Computer-assisted system for interstitial hyperthermia]. '
- source_sentence: Lateral Ventricle AT/RT
sentences:
- 'Improved Assessment of Pathological Regurgitation in Patients with Prosthetic
Heart Valves by Multiplane Transesophageal Echocardiography. '
- '[Surgical anatomy of the lateral ventricles]. '
- 'Lateral Ventricle Atypical Teratoid/Rhabdoid Tumor (AT/RT): Case Report and Review
of Literature. '
- source_sentence: Parkinsonian motor fluctuations
sentences:
- 'Basic mechanisms of motor fluctuations. '
- 'Nonmotor Fluctuations in Parkinson''s Disease. '
- 'Sodium conductance in calcium channels of single smooth muscle cells of guinea-pig
taenia caeci. '
- source_sentence: Phagocytic Assay
sentences:
- 'Assay for phagocytosis. '
- 'Opsonophagocytic assay. '
- 'Clinical evaluation of synthetic aperture sequential beamforming ultrasound in
patients with liver tumors. '
- source_sentence: Content validity assessment
sentences:
- 'Content validity is naught. '
- 'Male requires a higher median target effect-site concentration of propofol for
I-gel placement when combined with dexmedetomidine. '
- 'Establishing content-validity of a disease-specific health-related quality of
life instrument for patients with chronic hypersensitivity pneumonitis. '
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on allenai/specter2_base
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: cosine_accuracy@1
value: 0.04
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.2
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.22
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.3
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.04
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.06666666666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.044000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.03
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.03
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.18
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.2
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.27
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.15735897323110787
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.13194444444444445
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.13092350353731416
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: cosine_accuracy@1
value: 0.2
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.36
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.42
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.52
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.12
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.084
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.052000000000000005
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.36
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.42
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.52
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.35375176104312445
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.30138095238095236
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.31610409814616347
name: Cosine Map@100
- task:
type: nano-beir
name: Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: cosine_accuracy@1
value: 0.12000000000000001
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.28
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.32
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.41000000000000003
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.12000000000000001
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.09333333333333332
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.064
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.041
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.115
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.27
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.31
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.395
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.25555536713711613
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.21666269841269842
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.22351380084173883
name: Cosine Map@100
---
# SentenceTransformer based on allenai/specter2_base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [allenai/specter2_base](https://huggingface.co/allenai/specter2_base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [allenai/specter2_base](https://huggingface.co/allenai/specter2_base) <!-- at revision 3447645e1def9117997203454fa4495937bfbd83 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: PeftModelForFeatureExtraction
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("wwydmanski/specter2_pubmed-v0.7")
# Run inference
sentences = [
'Content validity assessment',
'Establishing content-validity of a disease-specific health-related quality of life instrument for patients with chronic hypersensitivity pneumonitis. ',
'Content validity is naught. ',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `NanoNQ` and `NanoMSMARCO`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | NanoNQ | NanoMSMARCO |
|:--------------------|:-----------|:------------|
| cosine_accuracy@1 | 0.04 | 0.2 |
| cosine_accuracy@3 | 0.2 | 0.36 |
| cosine_accuracy@5 | 0.22 | 0.42 |
| cosine_accuracy@10 | 0.3 | 0.52 |
| cosine_precision@1 | 0.04 | 0.2 |
| cosine_precision@3 | 0.0667 | 0.12 |
| cosine_precision@5 | 0.044 | 0.084 |
| cosine_precision@10 | 0.03 | 0.052 |
| cosine_recall@1 | 0.03 | 0.2 |
| cosine_recall@3 | 0.18 | 0.36 |
| cosine_recall@5 | 0.2 | 0.42 |
| cosine_recall@10 | 0.27 | 0.52 |
| **cosine_ndcg@10** | **0.1574** | **0.3538** |
| cosine_mrr@10 | 0.1319 | 0.3014 |
| cosine_map@100 | 0.1309 | 0.3161 |
#### Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>NanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.NanoBEIREvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.12 |
| cosine_accuracy@3 | 0.28 |
| cosine_accuracy@5 | 0.32 |
| cosine_accuracy@10 | 0.41 |
| cosine_precision@1 | 0.12 |
| cosine_precision@3 | 0.0933 |
| cosine_precision@5 | 0.064 |
| cosine_precision@10 | 0.041 |
| cosine_recall@1 | 0.115 |
| cosine_recall@3 | 0.27 |
| cosine_recall@5 | 0.31 |
| cosine_recall@10 | 0.395 |
| **cosine_ndcg@10** | **0.2556** |
| cosine_mrr@10 | 0.2167 |
| cosine_map@100 | 0.2235 |
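The two tables can be reproduced with the evaluators named above. The snippet below is a minimal sketch, assuming the published model id and that `NanoBEIREvaluator` accepts the two dataset names as written; the result key follows the `NanoBEIR_mean_cosine_*` naming used in the training logs.

```python
# Hedged sketch: re-running the NanoBEIR evaluation reported above.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

model = SentenceTransformer("wwydmanski/specter2_pubmed-v0.7")

# Restrict to the two subsets reported in the tables: NanoNQ and NanoMSMARCO.
evaluator = NanoBEIREvaluator(dataset_names=["NQ", "MSMARCO"])
results = evaluator(model)

# Keys follow the "<prefix>_cosine_<metric>" naming seen in the tables above.
print(results["NanoBEIR_mean_cosine_ndcg@10"])
```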
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 57,566 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 7.4 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 19.98 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.3 tokens</li><li>max: 46 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------|
| <code>neutron camera autofocus</code> | <code>The autofocusing system of the IMAT neutron camera. </code> | <code>Robust autofocusing in microscopy. </code> |
| <code>Melanophore-stimulating hormone-melatonin antagonism</code> | <code>Melanophore-stimulating hormone-melatonin antagonism in relation to colour change in Xenopus laevis. </code> | <code>Melanin-concentrating hormone, melanocortin receptors and regulation of luteinizing hormone release. </code> |
| <code>Healthcare Reform Criticism</code> | <code>Experts critique doctors' ideas for reforming health care. </code> | <code>Healthcare reform? </code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
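For orientation, this is how the loss is typically wired to (anchor, positive, negative) triplets like the samples above: a sketch with in-memory toy data, where only the column names, the base model, and the loss parameters are taken from this card.

```python
# Hedged sketch: MultipleNegativesRankingLoss over anchor/positive/negative triplets.
from datasets import Dataset
from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("allenai/specter2_base")

# Toy rows mirroring the triplet columns shown in the table above.
train_dataset = Dataset.from_dict({
    "anchor": ["neutron camera autofocus"],
    "positive": ["The autofocusing system of the IMAT neutron camera."],
    "negative": ["Robust autofocusing in microscopy."],
})

# scale=20.0 and cosine similarity match the parameters listed above.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```

In-batch negatives make this loss more effective as the batch grows, which is consistent with the per-device batch size of 64 listed under Training Hyperparameters below.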
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `gradient_accumulation_steps`: 8
- `learning_rate`: 3e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 1
- `lr_scheduler_type`: cosine_with_restarts
- `warmup_ratio`: 0.1
- `bf16`: True
- `batch_sampler`: no_duplicates
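Expressed in code, the non-default values above map onto the trainer arguments roughly as follows (a sketch; the `output_dir` is a placeholder, and `BatchSamplers.NO_DUPLICATES` is the programmatic spelling of `no_duplicates`).

```python
# Hedged sketch: the non-default hyperparameters as SentenceTransformerTrainingArguments.
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="specter2_pubmed-finetuned",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=64,
    gradient_accumulation_steps=8,
    learning_rate=3e-5,
    weight_decay=0.01,
    num_train_epochs=1,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```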
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: cosine_with_restarts
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | NanoNQ_cosine_ndcg@10 | NanoMSMARCO_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:---------------------:|:--------------------------:|:----------------------------:|
| 0 | 0 | - | 0.0633 | 0.2640 | 0.1636 |
| 0.0089 | 1 | 22.3889 | - | - | - |
| 0.0178 | 2 | 22.1875 | - | - | - |
| 0.0267 | 3 | 21.4657 | - | - | - |
| 0.0356 | 4 | 21.7306 | - | - | - |
| 0.0444 | 5 | 21.3965 | - | - | - |
| 0.0533 | 6 | 21.5539 | - | - | - |
| 0.0622 | 7 | 21.5853 | - | - | - |
| 0.0711 | 8 | 21.6282 | - | - | - |
| 0.08 | 9 | 21.2169 | - | - | - |
| 0.0889 | 10 | 21.1228 | - | - | - |
| 0.0978 | 11 | 20.7026 | - | - | - |
| 0.1067 | 12 | 21.2562 | - | - | - |
| 0.1156 | 13 | 21.1227 | - | - | - |
| 0.1244 | 14 | 20.6465 | - | - | - |
| 0.1333 | 15 | 20.5888 | - | - | - |
| 0.1422 | 16 | 20.2334 | - | - | - |
| 0.1511 | 17 | 20.6545 | - | - | - |
| 0.16 | 18 | 20.2517 | - | - | - |
| 0.1689 | 19 | 19.6825 | - | - | - |
| 0.1778 | 20 | 19.9251 | - | - | - |
| 0.1867 | 21 | 19.6937 | - | - | - |
| 0.1956 | 22 | 19.2779 | - | - | - |
| 0.2044 | 23 | 19.2927 | - | - | - |
| 0.2133 | 24 | 19.2895 | - | - | - |
| 0.2222 | 25 | 18.9854 | 0.1085 | 0.2978 | 0.2032 |
| 0.2311 | 26 | 18.5096 | - | - | - |
| 0.24 | 27 | 18.3789 | - | - | - |
| 0.2489 | 28 | 18.2159 | - | - | - |
| 0.2578 | 29 | 17.8306 | - | - | - |
| 0.2667 | 30 | 17.5964 | - | - | - |
| 0.2756 | 31 | 17.2527 | - | - | - |
| 0.2844 | 32 | 17.2274 | - | - | - |
| 0.2933 | 33 | 17.557 | - | - | - |
| 0.3022 | 34 | 17.4682 | - | - | - |
| 0.3111 | 35 | 16.9115 | - | - | - |
| 0.32 | 36 | 16.9938 | - | - | - |
| 0.3289 | 37 | 16.1648 | - | - | - |
| 0.3378 | 38 | 16.2908 | - | - | - |
| 0.3467 | 39 | 16.7883 | - | - | - |
| 0.3556 | 40 | 16.5278 | - | - | - |
| 0.3644 | 41 | 15.4466 | - | - | - |
| 0.3733 | 42 | 15.3954 | - | - | - |
| 0.3822 | 43 | 16.1363 | - | - | - |
| 0.3911 | 44 | 14.8857 | - | - | - |
| 0.4 | 45 | 15.5596 | - | - | - |
| 0.4089 | 46 | 15.6978 | - | - | - |
| 0.4178 | 47 | 14.6959 | - | - | - |
| 0.4267 | 48 | 15.0677 | - | - | - |
| 0.4356 | 49 | 14.4375 | - | - | - |
| 0.4444 | 50 | 15.0901 | 0.1348 | 0.3290 | 0.2319 |
| 0.4533 | 51 | 13.813 | - | - | - |
| 0.4622 | 52 | 14.3135 | - | - | - |
| 0.4711 | 53 | 14.9517 | - | - | - |
| 0.48 | 54 | 14.0599 | - | - | - |
| 0.4889 | 55 | 13.8699 | - | - | - |
| 0.4978 | 56 | 14.6277 | - | - | - |
| 0.5067 | 57 | 13.3742 | - | - | - |
| 0.5156 | 58 | 13.7985 | - | - | - |
| 0.5244 | 59 | 13.2972 | - | - | - |
| 0.5333 | 60 | 12.9836 | - | - | - |
| 0.5422 | 61 | 13.2035 | - | - | - |
| 0.5511 | 62 | 13.399 | - | - | - |
| 0.56 | 63 | 12.8694 | - | - | - |
| 0.5689 | 64 | 12.9775 | - | - | - |
| 0.5778 | 65 | 13.5685 | - | - | - |
| 0.5867 | 66 | 12.5359 | - | - | - |
| 0.5956 | 67 | 12.7989 | - | - | - |
| 0.6044 | 68 | 12.2337 | - | - | - |
| 0.6133 | 69 | 12.9103 | - | - | - |
| 0.6222 | 70 | 12.6319 | - | - | - |
| 0.6311 | 71 | 12.3662 | - | - | - |
| 0.64 | 72 | 12.4788 | - | - | - |
| 0.6489 | 73 | 12.7665 | - | - | - |
| 0.6578 | 74 | 12.7189 | - | - | - |
| 0.6667 | 75 | 11.6918 | 0.1558 | 0.3619 | 0.2588 |
| 0.6756 | 76 | 12.0761 | - | - | - |
| 0.6844 | 77 | 12.0588 | - | - | - |
| 0.6933 | 78 | 12.1507 | - | - | - |
| 0.7022 | 79 | 11.7982 | - | - | - |
| 0.7111 | 80 | 12.6278 | - | - | - |
| 0.72 | 81 | 12.1629 | - | - | - |
| 0.7289 | 82 | 11.9421 | - | - | - |
| 0.7378 | 83 | 12.1184 | - | - | - |
| 0.7467 | 84 | 11.9142 | - | - | - |
| 0.7556 | 85 | 12.1162 | - | - | - |
| 0.7644 | 86 | 12.2741 | - | - | - |
| 0.7733 | 87 | 11.8835 | - | - | - |
| 0.7822 | 88 | 11.8583 | - | - | - |
| 0.7911 | 89 | 11.74 | - | - | - |
| 0.8 | 90 | 12.0793 | - | - | - |
| 0.8089 | 91 | 11.6838 | - | - | - |
| 0.8178 | 92 | 11.6922 | - | - | - |
| 0.8267 | 93 | 11.9418 | - | - | - |
| 0.8356 | 94 | 12.2899 | - | - | - |
| 0.8444 | 95 | 12.0957 | - | - | - |
| 0.8533 | 96 | 12.0643 | - | - | - |
| 0.8622 | 97 | 12.3496 | - | - | - |
| 0.8711 | 98 | 12.3521 | - | - | - |
| 0.88 | 99 | 11.7082 | - | - | - |
| 0.8889 | 100 | 11.6085 | 0.1574 | 0.3538 | 0.2556 |
| 0.8978 | 101 | 11.7018 | - | - | - |
| 0.9067 | 102 | 11.8227 | - | - | - |
| 0.9156 | 103 | 12.5774 | - | - | - |
| 0.9244 | 104 | 11.465 | - | - | - |
| 0.9333 | 105 | 11.303 | - | - | - |
| 0.9422 | 106 | 11.8521 | - | - | - |
| 0.9511 | 107 | 11.6083 | - | - | - |
| 0.96 | 108 | 12.3972 | - | - | - |
| 0.9689 | 109 | 11.6962 | - | - | - |
| 0.9778 | 110 | 11.1335 | - | - | - |
| 0.9867 | 111 | 12.1325 | - | - | - |
| 0.9956 | 112 | 11.7444 | - | - | - |
</details>
### Framework Versions
- Python: 3.12.3
- Sentence Transformers: 3.3.1
- Transformers: 4.49.0
- PyTorch: 2.5.1
- Accelerate: 1.2.1
- Datasets: 2.19.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|