| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
| lesso04/de71102e-2e4e-44ca-93c5-21443af184fd | lesso04 | 2025-03-06T04:02:06Z | 0 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-7B-Instruct", "base_model:adapter:unsloth/Qwen2-7B-Instruct", "license:apache-2.0", "region:us"] | null | 2025-03-06T01:39:56Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: de71102e-2e4e-44ca-93c5-21443af184fd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# de71102e-2e4e-44ca-93c5-21443af184fd
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0386
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000204
- train_batch_size: 4
- eval_batch_size: 4
- seed: 40
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 0.7943 |
| 0.0389 | 0.4663 | 500 | 0.0386 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
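For reference, a minimal usage sketch (assuming the adapter targets the listed base model; the repo IDs come from this card, while the generation settings are illustrative):

```python
# Load the base model, attach this LoRA adapter with PEFT, and generate.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2-7B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "lesso04/de71102e-2e4e-44ca-93c5-21443af184fd")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-7B-Instruct")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```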
|
| lesso04/760397d1-b3aa-48ba-92c9-8cf78908233c | lesso04 | 2025-03-06T04:02:04Z | 0 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/Yarn-Llama-2-7b-128k", "base_model:adapter:NousResearch/Yarn-Llama-2-7b-128k", "region:us"] | null | 2025-03-05T23:19:35Z |
---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 760397d1-b3aa-48ba-92c9-8cf78908233c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 760397d1-b3aa-48ba-92c9-8cf78908233c
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000204
- train_batch_size: 4
- eval_batch_size: 4
- seed: 40
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | 0.4513 |
| 0.0041 | 0.3885 | 500 | 0.0002 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| lesso04/a3193b94-0f53-471b-89ac-2a2fb8f539ce | lesso04 | 2025-03-06T04:02:02Z | 0 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:01-ai/Yi-1.5-9B-Chat-16K", "base_model:adapter:01-ai/Yi-1.5-9B-Chat-16K", "license:apache-2.0", "region:us"] | null | 2025-03-05T20:14:54Z |
---
library_name: peft
license: apache-2.0
base_model: 01-ai/Yi-1.5-9B-Chat-16K
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a3193b94-0f53-471b-89ac-2a2fb8f539ce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# a3193b94-0f53-471b-89ac-2a2fb8f539ce
This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000204
- train_batch_size: 4
- eval_batch_size: 4
- seed: 40
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 0.5238 |
| 0.2874 | 0.0729 | 500 | 0.2566 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| lesso04/0322f0be-3616-4a07-8cc3-156ffe562b53 | lesso04 | 2025-03-06T04:02:00Z | 0 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct", "license:apache-2.0", "region:us"] | null | 2025-03-05T18:44:50Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0322f0be-3616-4a07-8cc3-156ffe562b53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 0322f0be-3616-4a07-8cc3-156ffe562b53
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000204
- train_batch_size: 4
- eval_batch_size: 4
- seed: 40
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 1.3569 |
| 0.2098 | 0.4460 | 500 | 0.1947 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| lesso04/6fcc5ad2-4434-419b-859a-ad0d23cf6349 | lesso04 | 2025-03-06T04:01:58Z | 0 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1", "base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1", "license:llama3", "region:us"] | null | 2025-03-05T13:35:30Z |
---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6fcc5ad2-4434-419b-859a-ad0d23cf6349
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 6fcc5ad2-4434-419b-859a-ad0d23cf6349
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000204
- train_batch_size: 4
- eval_batch_size: 4
- seed: 40
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 1.7147 |
| 0.6137 | 0.4655 | 500 | 0.6154 |
| 0.4502 | 0.9311 | 1000 | 0.4428 |
| 0.1818 | 1.3966 | 1500 | 0.4223 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| takedakoji00/Llama-3.1-8B-Instruct-custom-qg-7th_random_300_val_val_edit_distance | takedakoji00 | 2025-03-06T04:01:58Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-03-05T18:27:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
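As a placeholder, here is a hedged sketch: the repo name suggests a Llama-3.1-8B-Instruct fine-tune for question generation, so standard causal-LM loading is assumed.

```python
# Hypothetical loading sketch; the prompt below is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "takedakoji00/Llama-3.1-8B-Instruct-custom-qg-7th_random_300_val_val_edit_distance"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Generate a question about this passage:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```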
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| lesso04/aa3b906f-f484-4edd-8009-7c09a73ff3ce | lesso04 | 2025-03-06T04:01:56Z | 0 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-14B-Instruct", "base_model:adapter:unsloth/Qwen2.5-14B-Instruct", "license:apache-2.0", "region:us"] | null | 2025-03-05T09:55:38Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-14B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aa3b906f-f484-4edd-8009-7c09a73ff3ce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# aa3b906f-f484-4edd-8009-7c09a73ff3ce
This model is a fine-tuned version of [unsloth/Qwen2.5-14B-Instruct](https://huggingface.co/unsloth/Qwen2.5-14B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000204
- train_batch_size: 4
- eval_batch_size: 4
- seed: 40
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | 1.5336 |
| 0.3665 | 0.3806 | 500 | 0.3578 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| lesso03/8682f5c9-d5b4-4781-9e47-8685b4ce87b8 | lesso03 | 2025-03-06T04:01:40Z | 0 | 0 | peft | ["peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-1.1-2b-it", "base_model:adapter:unsloth/gemma-1.1-2b-it", "license:apache-2.0", "region:us"] | null | 2025-03-05T20:38:22Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-1.1-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8682f5c9-d5b4-4781-9e47-8685b4ce87b8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 8682f5c9-d5b4-4781-9e47-8685b4ce87b8
This model is a fine-tuned version of [unsloth/gemma-1.1-2b-it](https://huggingface.co/unsloth/gemma-1.1-2b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9060
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000203
- train_batch_size: 4
- eval_batch_size: 4
- seed: 30
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 5.4800 |
| 1.9148 | 0.0987 | 500 | 1.9060 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| lesso03/03d2a8ae-8158-4634-b4f7-03e596bf377a | lesso03 | 2025-03-06T04:01:38Z | 0 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Capybara-7B-V1", "base_model:adapter:NousResearch/Nous-Capybara-7B-V1", "license:mit", "region:us"] | null | 2025-03-05T18:27:14Z |
---
library_name: peft
license: mit
base_model: NousResearch/Nous-Capybara-7B-V1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 03d2a8ae-8158-4634-b4f7-03e596bf377a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 03d2a8ae-8158-4634-b4f7-03e596bf377a
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000203
- train_batch_size: 4
- eval_batch_size: 4
- seed: 30
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 1.2831 |
| 0.7544 | 0.3043 | 500 | 0.7553 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| lesso03/fddad2f5-50b1-423c-8d59-e61f14b08db8 | lesso03 | 2025-03-06T04:01:35Z | 0 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B", "base_model:adapter:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B", "region:us"] | null | 2025-03-05T15:15:07Z |
---
library_name: peft
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fddad2f5-50b1-423c-8d59-e61f14b08db8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# fddad2f5-50b1-423c-8d59-e61f14b08db8
This model is a fine-tuned version of [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000203
- train_batch_size: 4
- eval_batch_size: 4
- seed: 30
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 0.8766 |
| 0.5187 | 0.0585 | 500 | 0.5154 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| lesso02/b6da6e58-400d-4d05-82fd-99df31685aa6 | lesso02 | 2025-03-06T04:01:19Z | 0 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-7B-Instruct", "base_model:adapter:unsloth/Qwen2-7B-Instruct", "license:apache-2.0", "region:us"] | null | 2025-03-06T01:40:01Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b6da6e58-400d-4d05-82fd-99df31685aa6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# b6da6e58-400d-4d05-82fd-99df31685aa6
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000202
- train_batch_size: 4
- eval_batch_size: 4
- seed: 20
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 0.7945 |
| 0.0419 | 0.4663 | 500 | 0.0401 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| lesso02/af1244bf-9e22-4425-83a4-9ff753c4b5a7 | lesso02 | 2025-03-06T04:01:06Z | 0 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B", "base_model:adapter:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B", "region:us"] | null | 2025-03-05T15:15:56Z |
---
library_name: peft
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: af1244bf-9e22-4425-83a4-9ff753c4b5a7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# af1244bf-9e22-4425-83a4-9ff753c4b5a7
This model is a fine-tuned version of [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000202
- train_batch_size: 4
- eval_batch_size: 4
- seed: 20
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 0.8768 |
| 0.5175 | 0.0585 | 500 | 0.5161 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| lesso02/013addca-8225-4d70-af18-ef197d7fdf0e | lesso02 | 2025-03-06T04:00:58Z | 0 | 0 | peft | ["peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:lcw99/zephykor-ko-7b-chang", "base_model:adapter:lcw99/zephykor-ko-7b-chang", "region:us"] | null | 2025-03-05T12:51:52Z |
---
library_name: peft
base_model: lcw99/zephykor-ko-7b-chang
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 013addca-8225-4d70-af18-ef197d7fdf0e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 013addca-8225-4d70-af18-ef197d7fdf0e
This model is a fine-tuned version of [lcw99/zephykor-ko-7b-chang](https://huggingface.co/lcw99/zephykor-ko-7b-chang) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000202
- train_batch_size: 4
- eval_batch_size: 4
- seed: 20
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0011 | 1 | 3.2873 |
| 1.9358 | 0.5268 | 500 | 0.2344 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| texanrangee/dacdd58d-ee74-474f-9f64-1e82f56acc3e | texanrangee | 2025-03-06T04:00:57Z | 0 | 0 | transformers | ["transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-03-06T02:30:27Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| lesso02/1cc21e3b-88bd-471e-832c-65c9a75403dd | lesso02 | 2025-03-06T04:00:40Z | 0 | 0 | peft | ["peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/mistral-7b-v0.3", "base_model:adapter:unsloth/mistral-7b-v0.3", "license:apache-2.0", "region:us"] | null | 2025-03-05T10:55:26Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1cc21e3b-88bd-471e-832c-65c9a75403dd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 1cc21e3b-88bd-471e-832c-65c9a75403dd
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000202
- train_batch_size: 4
- eval_batch_size: 4
- seed: 20
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | 0.4856 |
| 1.5353 | 0.3806 | 500 | 0.1793 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| lesso01/5b93413b-7a81-4dca-a6b8-a94e7e50fce7 | lesso01 | 2025-03-06T04:00:20Z | 0 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-135M", "base_model:adapter:unsloth/SmolLM-135M", "license:apache-2.0", "region:us"] | null | 2025-03-05T21:46:25Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5b93413b-7a81-4dca-a6b8-a94e7e50fce7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 5b93413b-7a81-4dca-a6b8-a94e7e50fce7
This model is a fine-tuned version of [unsloth/SmolLM-135M](https://huggingface.co/unsloth/SmolLM-135M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000201
- train_batch_size: 4
- eval_batch_size: 4
- seed: 10
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 7000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 1.2054 |
| 0.9425 | 0.3430 | 500 | 0.9315 |
| 0.9202 | 0.6860 | 1000 | 0.8849 |
| 0.8687 | 1.0291 | 1500 | 0.8518 |
| 0.8354 | 1.3721 | 2000 | 0.8317 |
| 0.8244 | 1.7151 | 2500 | 0.8181 |
| 0.7854 | 2.0581 | 3000 | 0.8035 |
| 0.7883 | 2.4011 | 3500 | 0.7945 |
| 0.7697 | 2.7441 | 4000 | 0.7866 |
| 0.7577 | 3.0872 | 4500 | 0.7821 |
| 0.7594 | 3.4302 | 5000 | 0.7772 |
| 0.7453 | 3.7732 | 5500 | 0.7731 |
| 0.7461 | 4.1163 | 6000 | 0.7755 |
| 0.7604 | 4.4593 | 6500 | 0.7726 |
| 0.7341 | 4.8023 | 7000 | 0.7737 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
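For reference, a minimal sketch of merging this adapter into the SmolLM-135M base weights so the result can be saved and used as a standalone model (assumes the adapter targets the listed base; the output path is arbitrary):

```python
# Merge the LoRA deltas into the base weights with PEFT's merge_and_unload.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM-135M")
model = PeftModel.from_pretrained(base, "lesso01/5b93413b-7a81-4dca-a6b8-a94e7e50fce7")
merged = model.merge_and_unload()  # folds adapter weights into the base model
merged.save_pretrained("smollm-135m-merged")
```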
|
| lesso01/f93efdee-b2f8-4324-ad9f-a7b75ceece9b | lesso01 | 2025-03-06T04:00:11Z | 0 | 0 | peft | ["peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-1.1-2b-it", "base_model:adapter:unsloth/gemma-1.1-2b-it", "license:apache-2.0", "region:us"] | null | 2025-03-05T20:38:29Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-1.1-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f93efdee-b2f8-4324-ad9f-a7b75ceece9b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# f93efdee-b2f8-4324-ad9f-a7b75ceece9b
This model is a fine-tuned version of [unsloth/gemma-1.1-2b-it](https://huggingface.co/unsloth/gemma-1.1-2b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000201
- train_batch_size: 4
- eval_batch_size: 4
- seed: 10
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 5.4791 |
| 1.9431 | 0.0987 | 500 | 1.9344 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| mradermacher/Haphazardv1-i1-GGUF | mradermacher | 2025-03-06T04:00:06Z | 0 | 2 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:Yoesph/Haphazardv1", "base_model:quantized:Yoesph/Haphazardv1", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2025-03-05T22:08:21Z |
---
base_model: Yoesph/Haphazardv1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Yoesph/Haphazardv1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Haphazardv1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
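For a quick start, a hedged Python sketch (llama-cpp-python is one GGUF-capable runtime among several; the file name is one quant from the table below):

```python
# Download a single quant from this repo and run a short completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Haphazardv1-i1-GGUF",
    filename="Haphazardv1.i1-Q4_K_M.gguf",  # the "fast, recommended" pick below
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello,", max_tokens=64)["choices"][0]["text"])
```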
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Haphazardv1-i1-GGUF/resolve/main/Haphazardv1.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
| lesso01/4096dfb1-3a41-43c1-9d4e-fc6d98cbbadc | lesso01 | 2025-03-06T04:00:02Z | 0 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B", "base_model:adapter:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B", "region:us"] | null | 2025-03-05T15:16:28Z |
---
library_name: peft
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4096dfb1-3a41-43c1-9d4e-fc6d98cbbadc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 4096dfb1-3a41-43c1-9d4e-fc6d98cbbadc
This model is a fine-tuned version of [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000201
- train_batch_size: 4
- eval_batch_size: 4
- seed: 10
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 0.8768 |
| 0.5133 | 0.0585 | 500 | 0.5155 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| lesso01/5da7de8d-3f88-401a-a526-c1ec5d5911df | lesso01 | 2025-03-06T03:59:55Z | 0 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-14B-Instruct", "base_model:adapter:unsloth/Qwen2.5-14B-Instruct", "license:apache-2.0", "region:us"] | null | 2025-03-05T09:55:40Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-14B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5da7de8d-3f88-401a-a526-c1ec5d5911df
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 5da7de8d-3f88-401a-a526-c1ec5d5911df
This model is a fine-tuned version of [unsloth/Qwen2.5-14B-Instruct](https://huggingface.co/unsloth/Qwen2.5-14B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000201
- train_batch_size: 4
- eval_batch_size: 4
- seed: 10
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | 1.5343 |
| 0.3809 | 0.3806 | 500 | 0.3598 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| ASpiderSteeped/egirl | ASpiderSteeped | 2025-03-06T03:59:30Z | 0 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"] | text-to-image | 2025-03-06T03:57:29Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/df0r49d-9e3d71f5-ecbb-4009-bcd2-0429aab52308.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: egirl, ahegao
---
# egirl
<Gallery />
## Trigger words
You should use `egirl` and `ahegao` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/ASpiderSteeped/egirl/tree/main) them in the Files & versions tab.
|
| rohinm/model_top_p_5 | rohinm | 2025-03-06T03:57:32Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-03-06T03:55:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
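As a placeholder, a hedged sketch: the tags mark this as a qwen2 text-generation model, so the generic transformers pipeline is assumed to apply.

```python
# Minimal text-generation example; the prompt is illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="rohinm/model_top_p_5")
print(generator("Hello, my name is", max_new_tokens=32)[0]["generated_text"])
```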
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jonjew/ShuQi
|
Jonjew
| 2025-03-06T03:54:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-06T03:53:25Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
Breathtaking over the shoulder shot photography of ohwx looking at viewer,
imperfections, necklace, looking over shoulders, eyelashes, fine hair
detail, entire hairstyle visible, perfect eyes with iris pattern, sensual
lips, nose, (perfectly sharp:1.3), realistic textures, (deep focus, focus on
background:1.5), 8k uhd, dslr, ultra high quality image, film grain,
Fujifilm XT3
parameters:
negative_prompt: ShuQi_flux_lora_v1_000001500_Weight-1.1
output:
url: >-
images/ShuQi_flux_lora_v1_000001500_Weight-1.1_2024-12-06_2024-12-06-032508_0.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ohwx
license: unknown
---
# Shu Qi (Flux)
<Gallery />
## Model description
FROM https://civitai.com/models/1131170/shu-qi-flux?modelVersionId=1271648
Trigger ohwx
Strength 1
Guidance: 2.2-3
Steps (dev): 30-40
👍 *** If you love it, like it! ***👍
workflow: https://civitai.com/models/1088678
👑 Shu Qi 🎬
About my celebrity LoRAs
90% of the dataset used to build my LoRAs consists of head images only. That really helps them blend with other LoRAs or models, since there are no hands or feet to interfere with the final render. When you get distorted hands with a person LoRA, it's because the dataset used to train it contains hand information; that won't happen with my LoRAs.
I've trained on Flux.1 Dev, so other merged or trained checkpoints may not work well with my LoRAs.
The drawback is that the body may not reflect reality, though that may not really be a drawback.
This is a LoRA for Flux.1 Dev. It works with other models, but you must drop some of the single blocks (a good starting range is 19-32).
Trained with ai-toolkit, so merging it is not easy.
To get the best results:
Guidance: 2.2-3
Steps (dev): 30-40
daemon detailer (lying sigma sampler): factor: -0.02, start 0.06, end 0.75
Resolution: upscale the latent by 1.25 or 1.5 and you'll get an awesome result (takes longer, but worth it).
Trigger word (may work better in certain contexts): ohwx
Enjoy!
## Trigger words
You should use `ohwx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/ShuQi/tree/main) them in the Files & versions tab.
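For reference, a minimal diffusers sketch in the same style as other FLUX LoRA cards; it assumes the repo holds a single LoRA weights file, and uses the guidance and step values recommended above:
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
# Assumes a single LoRA weights file in the repo (see Files & versions).
pipeline.load_lora_weights("Jonjew/ShuQi")

# "ohwx" is the trigger word; guidance 2.2-3 and 30-40 steps per the notes above.
image = pipeline(
    "ohwx, portrait photo, film grain",
    guidance_scale=2.5,
    num_inference_steps=35,
).images[0]
image.save("shuqi.png")
```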
|
John6666/alice-illustrious-v13-sdxl
|
John6666
| 2025-03-06T03:53:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"ntrmix",
"animagine",
"animagine4",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:finetune:Laxhar/noobai-XL-1.1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-03-06T03:44:40Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- ntrmix
- animagine
- animagine4
- illustrious
base_model:
- OnomaAIResearch/Illustrious-xl-early-release-v0
- Laxhar/noobai-XL-1.1
- cagliostrolab/animagine-xl-4.0
---
Original model is [here](https://civitai.com/models/1211538?modelVersionId=1495614).
This model was created by [jmeswin10](https://civitai.com/user/jmeswin10).
|
fats-fme/a4050380-ed90-4d12-b757-d8c7da8595f0
|
fats-fme
| 2025-03-06T03:53:14Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:numind/NuExtract-1.5",
"base_model:adapter:numind/NuExtract-1.5",
"license:mit",
"region:us"
] | null | 2025-03-06T03:14:07Z |
---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4050380-ed90-4d12-b757-d8c7da8595f0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: numind/NuExtract-v1.5
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 314e7b4648fe0f24_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/314e7b4648fe0f24_train_data.json
type:
field_input: init_response
field_instruction: prompt
field_output: revision_response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/a4050380-ed90-4d12-b757-d8c7da8595f0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 256
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 128
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 70GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/314e7b4648fe0f24_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4b6eb12a-d3f7-4e8d-838a-6f7133f4d322
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4b6eb12a-d3f7-4e8d-838a-6f7133f4d322
warmup_steps: 100
weight_decay: 0.05
xformers_attention: null
```
</details><br>
# a4050380-ed90-4d12-b757-d8c7da8595f0
This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 0.5925 |
| 0.9069 | 0.0192 | 100 | 0.2619 |
| 0.9986 | 0.0383 | 200 | 0.2271 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
naimulislam/aurora-think3-GGUF
|
naimulislam
| 2025-03-06T03:52:56Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T03:17:59Z |
---
base_model: unsloth/qwen2.5-0.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** naimulislam
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-0.5b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mlfoundations-dev/instruction_filtering_scale_up_code_base_embedding_filter_mean_per_domain_1K
|
mlfoundations-dev
| 2025-03-06T03:51:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-06T02:52:15Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: instruction_filtering_scale_up_code_base_embedding_filter_mean_per_domain_1K
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# instruction_filtering_scale_up_code_base_embedding_filter_mean_per_domain_1K
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/instruction_filtering_scale_up_code_base_embedding_filter_mean_per_domain_1K dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 12
- total_train_batch_size: 96
- total_eval_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
xrt89/SIAXRT1
|
xrt89
| 2025-03-06T03:51:26Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-06T03:51:07Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: SIAXRT
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# SIAXRT1
<Gallery />
## Model description
## Trigger words
You should use `SIAXRT` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/xrt89/SIAXRT1/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
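A minimal hedged diffusers sketch, following the pattern of similar FLUX LoRA cards (it assumes the repo holds a single LoRA weights file):
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("xrt89/SIAXRT1")  # assumes one LoRA weights file

# Use the trigger word "SIAXRT" in the prompt.
image = pipeline("SIAXRT, portrait photo").images[0]
```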
|
mlfoundations-dev/instruction_filtering_scale_up_code_base_embedding_filter_mean_1K
|
mlfoundations-dev
| 2025-03-06T03:51:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-06T02:51:31Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: instruction_filtering_scale_up_code_base_embedding_filter_mean_1K
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# instruction_filtering_scale_up_code_base_embedding_filter_mean_1K
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/instruction_filtering_scale_up_code_base_embedding_filter_mean_1K dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 12
- total_train_batch_size: 96
- total_eval_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
huggingkot/Llama-3-8B-Instruct-abliterated-v2-bnb-4bit
|
huggingkot
| 2025-03-06T03:51:11Z | 0 | 0 | null |
[
"safetensors",
"base_model:cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2",
"base_model:finetune:cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2",
"8-bit",
"region:us"
] | null | 2025-03-06T03:50:24Z |
---
base_model:
- cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2
---
This is a converted weight of the [Llama-3-8B-Instruct-abliterated-v2](https://huggingface.co/cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2) model in [unsloth 4-bit dynamic quant](https://archive.is/EFz7P), produced with this [Colab notebook](https://colab.research.google.com/drive/1P23C66j3ga49kBRnDNlmRce7R_l_-L5l?usp=sharing).
## About this Conversion
This conversion uses **Unsloth** to load the model in **4-bit** format and force-save it in the same **4-bit** format.
### How 4-bit Quantization Works
- The actual **4-bit quantization** is handled by **BitsAndBytes (bnb)**, which runs under **Torch**.
- **Unsloth** acts as a wrapper, simplifying and optimizing the process for better efficiency.
This allows for reduced memory usage and faster inference while keeping the model compact.
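A minimal hedged loading sketch; it assumes the saved bnb quantization config is picked up automatically by `transformers`, as is usual for pre-quantized 4-bit checkpoints:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

# The bnb 4-bit quantization_config travels with the weights, so transformers
# loads the model directly in 4-bit (requires bitsandbytes and a CUDA GPU).
model_id = "huggingkot/Llama-3-8B-Instruct-abliterated-v2-bnb-4bit"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```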
|
naimulislam/aurora-think3
|
naimulislam
| 2025-03-06T03:49:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T03:14:03Z |
---
base_model: unsloth/qwen2.5-0.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** naimulislam
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-0.5b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kenken6696/Llama-3.2-3B_3_mix_position_famous_unrecognized
|
kenken6696
| 2025-03-06T03:45:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-06T03:42:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
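In the absence of documented usage, a minimal hedged sketch assuming a standard Llama-architecture causal LM (per the repo tags):
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes a standard causal LM checkpoint, per the repo's "llama" tag.
model_id = "kenken6696/Llama-3.2-3B_3_mix_position_famous_unrecognized"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```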
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
huggingkot/IceNalyvkaRP-7b-bnb-4bit
|
huggingkot
| 2025-03-06T03:40:05Z | 0 | 0 | null |
[
"safetensors",
"base_model:icefog72/IceNalyvkaRP-7b",
"base_model:finetune:icefog72/IceNalyvkaRP-7b",
"8-bit",
"region:us"
] | null | 2025-03-06T03:38:52Z |
---
base_model:
- icefog72/IceNalyvkaRP-7b
---
This is a converted weight of the [IceNalyvkaRP-7b](https://huggingface.co/icefog72/IceNalyvkaRP-7b) model in [unsloth 4-bit dynamic quant](https://archive.is/EFz7P), produced with this [Colab notebook](https://colab.research.google.com/drive/1P23C66j3ga49kBRnDNlmRce7R_l_-L5l?usp=sharing).
## About this Conversion
This conversion uses **Unsloth** to load the model in **4-bit** format and force-save it in the same **4-bit** format.
### How 4-bit Quantization Works
- The actual **4-bit quantization** is handled by **BitsAndBytes (bnb)**, which runs under **Torch**.
- **Unsloth** acts as a wrapper, simplifying and optimizing the process for better efficiency.
This allows for reduced memory usage and faster inference while keeping the model compact.
|
xxhe/sft-Llama-2-13b-chat-hf
|
xxhe
| 2025-03-06T03:39:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-05T04:43:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
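As a stopgap, a minimal hedged sketch; the repo tags mark this as a conversational Llama model, so it assumes the tokenizer ships a chat template:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xxhe/sft-Llama-2-13b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumes the tokenizer carries a chat template (typical for chat-tuned Llama-2).
messages = [{"role": "user", "content": "What does supervised fine-tuning do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```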
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xrt89/SIAXRT
|
xrt89
| 2025-03-06T03:36:16Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-06T03:36:06Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: undefined
instance_prompt:
license: other
---
# SIAXRT
<Gallery />
## Model description
## Trigger words
No trigger word was set for this model (the `instance_prompt` field above is empty).
## Download model
Weights for this model are available in Safetensors format.
[Download](/xrt89/SIAXRT/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-portrait-trainer](https://fal.ai/models/fal-ai/flux-lora-portrait-trainer).
|
mradermacher/jais-family-13b-chat-i1-GGUF
|
mradermacher
| 2025-03-06T03:34:34Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"Arabic",
"English",
"LLM",
"Decoder",
"causal-lm",
"jais-family",
"ar",
"en",
"base_model:inceptionai/jais-family-13b-chat",
"base_model:quantized:inceptionai/jais-family-13b-chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-06T02:25:41Z |
---
base_model: inceptionai/jais-family-13b-chat
language:
- ar
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Arabic
- English
- LLM
- Decoder
- causal-lm
- jais-family
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/inceptionai/jais-family-13b-chat
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/jais-family-13b-chat-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
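From Python, a minimal hedged sketch with `llama-cpp-python` (pick any quant from the table below and point `model_path` at the downloaded file):
```py
from llama_cpp import Llama

# Assumes you have downloaded one of the GGUF files listed below.
llm = Llama(model_path="jais-family-13b-chat.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in Arabic and English."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```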
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-IQ1_S.gguf) | i1-IQ1_S | 4.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-IQ1_M.gguf) | i1-IQ1_M | 4.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-IQ2_XS.gguf) | i1-IQ2_XS | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-IQ2_S.gguf) | i1-IQ2_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-IQ2_M.gguf) | i1-IQ2_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-Q2_K.gguf) | i1-Q2_K | 5.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-IQ3_S.gguf) | i1-IQ3_S | 6.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-Q4_0.gguf) | i1-Q4_0 | 7.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-Q4_1.gguf) | i1-Q4_1 | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-13b-chat-i1-GGUF/resolve/main/jais-family-13b-chat.i1-Q6_K.gguf) | i1-Q6_K | 11.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
huggingkot/Lumimaid-v0.2-8B-bnb-4bit
|
huggingkot
| 2025-03-06T03:34:28Z | 0 | 0 | null |
[
"safetensors",
"base_model:NeverSleep/Lumimaid-v0.2-8B",
"base_model:finetune:NeverSleep/Lumimaid-v0.2-8B",
"8-bit",
"region:us"
] | null | 2025-03-06T03:32:35Z |
---
base_model:
- NeverSleep/Lumimaid-v0.2-8B
---
This is a converted weight of the [Lumimaid-v0.2-8B](https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B) model in [unsloth 4-bit dynamic quant](https://archive.is/EFz7P), produced with this [Colab notebook](https://colab.research.google.com/drive/1P23C66j3ga49kBRnDNlmRce7R_l_-L5l?usp=sharing).
## About this Conversion
This conversion uses **Unsloth** to load the model in **4-bit** format and force-save it in the same **4-bit** format.
### How 4-bit Quantization Works
- The actual **4-bit quantization** is handled by **BitsAndBytes (bnb)**, which runs under **Torch**.
- **Unsloth** acts as a wrapper, simplifying and optimizing the process for better efficiency.
This allows for reduced memory usage and faster inference while keeping the model compact.
|
EndersonPro/3nd3rs0nVizc4
|
EndersonPro
| 2025-03-06T03:34:20Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-06T03:22:23Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: 3nd3rs0nVizc4
---
# 3Nd3Rs0Nvizc4
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `3nd3rs0nVizc4` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('EndersonPro/3nd3rs0nVizc4', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Dracones/QwQ-32B_exl2_6.0bpw
|
Dracones
| 2025-03-06T03:32:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"exl2",
"conversational",
"en",
"base_model:Qwen/QwQ-32B",
"base_model:quantized:Qwen/QwQ-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"region:us"
] |
text-generation
| 2025-03-06T03:28:28Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/QwQ-32B
tags:
- chat
- exl2
library_name: transformers
---
# QwQ-32B - EXL2 6.0bpw
This is a 6.0bpw EXL2 quant of [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B).
Details about the model can be found on the model page linked above.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 quants. Perplexity is the exponential of the average negative log-likelihood on a held-out text, so a lower score means the quant better preserves the original model's predictions.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 6.4393 |
| 7.0 | 6.4452 |
| 6.0 | 6.4693 |
| 5.0 | 6.4732 |
| 4.5 | 6.5417 |
| 4.0 | 6.6190 |
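A minimal hedged loading sketch with the `exllamav2` library; the API names follow recent exllamav2 releases, and the directory path is a placeholder for a local download of this repo:
```py
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Placeholder: a local directory containing this EXL2 quant.
config = ExLlamaV2Config("./QwQ-32B_exl2_6.0bpw")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Explain perplexity in one sentence.", max_new_tokens=64))
```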
|
jimbowyer123/OtterCountdown
|
jimbowyer123
| 2025-03-06T03:32:25Z | 0 | 0 | null |
[
"safetensors",
"unsloth",
"license:mit",
"region:us"
] | null | 2025-02-04T23:15:31Z |
---
license: mit
tags:
- unsloth
---
|
ClarenceDan/3763e69c-6347-467f-9f94-4a370c10951c
|
ClarenceDan
| 2025-03-06T03:31:56Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:numind/NuExtract-1.5",
"base_model:adapter:numind/NuExtract-1.5",
"license:mit",
"region:us"
] | null | 2025-03-06T03:15:39Z |
---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3763e69c-6347-467f-9f94-4a370c10951c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: numind/NuExtract-v1.5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 314e7b4648fe0f24_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/314e7b4648fe0f24_train_data.json
type:
field_input: init_response
field_instruction: prompt
field_output: revision_response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/3763e69c-6347-467f-9f94-4a370c10951c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/314e7b4648fe0f24_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4b6eb12a-d3f7-4e8d-838a-6f7133f4d322
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4b6eb12a-d3f7-4e8d-838a-6f7133f4d322
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3763e69c-6347-467f-9f94-4a370c10951c
This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.7482 | 0.0002 | 1 | 0.5930 |
| 2.1758 | 0.0006 | 3 | 0.5899 |
| 2.115 | 0.0011 | 6 | 0.5544 |
| 2.0178 | 0.0017 | 9 | 0.4484 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
d8912046/lora_model_test7
|
d8912046
| 2025-03-06T03:31:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-06T03:19:43Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** d8912046
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
zakariamtl/mustaphablidry
|
zakariamtl
| 2025-03-06T03:29:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-06T03:29:50Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Mustaphablidry
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('zakariamtl/mustaphablidry', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
RRoy233/Qwen2.5-7B-Instruct-inter-gsm8k-0305_222603
|
RRoy233
| 2025-03-06T03:29:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-06T03:26:43Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** RRoy233
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
casque/zy_Realism_Enhancer_v2
|
casque
| 2025-03-06T03:29:27Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-03-06T03:24:53Z |
---
license: creativeml-openrail-m
---
|
zakariamtl/sihamezzerhouni
|
zakariamtl
| 2025-03-06T03:27:38Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-06T03:27:37Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Sihamezzerhouni
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('zakariamtl/sihamezzerhouni', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
TareksLab/L3.3-TRP-BASE-50-70B
|
TareksLab
| 2025-03-06T03:26:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:Sao10K/L3-70B-Euryale-v2.1",
"base_model:merge:Sao10K/L3-70B-Euryale-v2.1",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"base_model:merge:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"base_model:nbeerbower/Llama-3.1-Nemotron-lorablated-70B",
"base_model:merge:nbeerbower/Llama-3.1-Nemotron-lorablated-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-06T02:33:51Z |
---
base_model:
- SicariusSicariiStuff/Negative_LLAMA_70B
- nbeerbower/Llama-3.1-Nemotron-lorablated-70B
- Sao10K/L3-70B-Euryale-v2.1
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with [nbeerbower/Llama-3.1-Nemotron-lorablated-70B](https://huggingface.co/nbeerbower/Llama-3.1-Nemotron-lorablated-70B) as the base.
### Models Merged
The following models were included in the merge:
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
* [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1)
* [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- model: Sao10K/L3-70B-Euryale-v2.1
- model: SicariusSicariiStuff/Negative_LLAMA_70B
merge_method: sce
base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
parameters:
select_topk: 0.50
dtype: float32
out_dtype: bfloat16
tokenizer:
source: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
```
|
pocodev/Survey-Gen-DSR1-8b-1-Merged
|
pocodev
| 2025-03-06T03:25:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T03:25:05Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pocodev
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ontocord/wide_3b_sft_stage1.1-ss1-with_generics_intr_math_stories.no_issue
|
ontocord
| 2025-03-06T03:25:22Z | 21 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-02T20:14:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
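Pending proper documentation, a minimal hedged sketch assuming a standard Qwen2-architecture causal LM (per the repo tags), with streamed output:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Assumes a standard causal LM checkpoint, per the repo's "qwen2" tag.
model_id = "ontocord/wide_3b_sft_stage1.1-ss1-with_generics_intr_math_stories.no_issue"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Tell me a short math story.", return_tensors="pt").to(model.device)
# Stream tokens to stdout as they are generated.
model.generate(**inputs, max_new_tokens=64, streamer=TextStreamer(tokenizer))
```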
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nerva1228/fengtu
|
Nerva1228
| 2025-03-06T03:24:39Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-06T03:24:38Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: fengtu
---
# Fengtu
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `fengtu` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/fengtu', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
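As a quick sketch of the fusing mentioned above (the scale value here is an arbitrary choice, not from this card):

```py
# Optionally fuse the LoRA into the base weights for faster inference
pipeline.fuse_lora(lora_scale=0.9)
image = pipeline('your prompt').images[0]
# Undo the fusion if you want to swap or remove adapters later
pipeline.unfuse_lora()
```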
|
ontocord/wide_3b_sft_stage1.1-ss1-with_generics_intr_math.no_issue
|
ontocord
| 2025-03-06T03:24:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-06T03:19:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Dracones/QwQ-32B_exl2_8.0bpw
|
Dracones
| 2025-03-06T03:23:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"exl2",
"conversational",
"en",
"base_model:Qwen/QwQ-32B",
"base_model:quantized:Qwen/QwQ-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-03-06T03:19:11Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/QwQ-32B
tags:
- chat
- exl2
library_name: transformers
---
# QwQ-32B - EXL2 8.0bpw
This is an 8.0bpw EXL2 quant of [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B).
Details about the model can be found at the above model page.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 6.4393 |
| 7.0 | 6.4452 |
| 6.0 | 6.4693 |
| 5.0 | 6.4732 |
| 4.5 | 6.5417 |
| 4.0 | 6.6190 |
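A minimal loading sketch with the exllamav2 library (the local model path and generation settings below are assumptions):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("/models/QwQ-32B_exl2_8.0bpw")  # assumed local download path
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split weights across the available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello, my name is", max_new_tokens=64))
```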
|
tona3738/qlora_test_model
|
tona3738
| 2025-03-06T03:23:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"cohere",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-03-06T03:20:17Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Yorkinjon/whisper-small-uzbek-ynv2
|
Yorkinjon
| 2025-03-06T03:21:56Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"uz",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-03-05T04:38:37Z |
---
library_name: transformers
language:
- uz
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Small uz - Yorkerdev
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
config: uz
split: test
args: 'config: uz, split: test'
metrics:
- name: Wer
type: wer
value: 34.6134644666883
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small uz - Yorkerdev
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3621
- Wer: 34.6135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 6000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.6038 | 0.2640 | 1000 | 0.5630 | 48.7719 |
| 0.4917 | 0.5279 | 2000 | 0.4511 | 40.7366 |
| 0.4377 | 0.7919 | 3000 | 0.4073 | 37.9496 |
| 0.3151 | 1.0557 | 4000 | 0.3867 | 38.4776 |
| 0.2944 | 1.3197 | 5000 | 0.3679 | 35.5133 |
| 0.275 | 1.5836 | 6000 | 0.3621 | 34.6135 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu118
- Datasets 3.3.2
- Tokenizers 0.21.0
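A minimal transcription sketch with the 🤗 transformers pipeline (the audio filename is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Yorkinjon/whisper-small-uzbek-ynv2",
)
print(asr("sample_uz.wav")["text"])  # placeholder audio file
```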
|
Dracones/EXL2_Measurements
|
Dracones
| 2025-03-06T03:21:12Z | 0 | 4 | null |
[
"region:us"
] | null | 2024-04-02T18:48:18Z |
# Midnight Miqu EXL2 Measurement Files
This repository contains the EXL2 measurement files used to produce the quantized models published under this account.
## Contents
| Filename | Description | Model Link |
|---------------------------------|---------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------|
| `midnight70b.json` | EXL2 measurement file for Midnight Miqu 70b | [sophosympatheia/Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) |
| `midnight103b.json` | EXL2 measurement file for Midnight Miqu 103b | [sophosympatheia/Midnight-Miqu-103B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-103B-v1.0) |
| `midnight70b-v15.json` | EXL2 measurement file for Midnight Miqu 70b v1.5 | [sophosympatheia/Midnight-Miqu-70B-v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5) |
| `Merged-RP-Stew-V2-34B.json` | EXL2 measurement file for Merged RP Stew V2 34B | [ParasiticRogue/Merged-RP-Stew-V2-34B](https://huggingface.co/ParasiticRogue/Merged-RP-Stew-V2-34B) |
| `perky70b.json` | EXL2 measurement file for Perky 70b | [Dracones/perky-70b-v0.1](https://huggingface.co/Dracones/perky-70b-v0.1) |
| `perky103b.json` | EXL2 measurement file for Perky 103b | [Dracones/perky-103b-v0.1](https://huggingface.co/Dracones/perky-103b-v0.1) |
| `miqu-1-70b-sf.json` | EXL2 measurement file for miqu-1-70b-sf | [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) |
| `c4ai-command-r-v01.json` | EXL2 measurement file for Cohere Command R | [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) |
| `c4ai-command-r-v01-rpcal.json` | EXL2 measurement file for Cohere Command R, RP-calibrated against PIPPA-cleaned | [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) |
| `Mixtral-8x22B-v0.1.json` | EXL2 measurement file for Mixtral 8x22 v0.1 | [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) |
| `mixtral-8x22b-instruct-oh.json` | EXL2 measurement file for mixtral-8x22b-instruct-oh | [fireworks-ai/mixtral-8x22b-instruct-oh](https://huggingface.co/fireworks-ai/mixtral-8x22b-instruct-oh) |
| `WizardLM-2-8x22B.json` | EXL2 measurement file for WizardLM-2-8x22B | [microsoft/WizardLM-2-8x22B](https://huggingface.co/microsoft/WizardLM-2-8x22B) |
| `CodeQwen1.5-7B-Chat.json` | EXL2 measurement file for CodeQwen1.5-7B-Chat | [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat) |
| `CodeQwen1.5-7B.json` | EXL2 measurement file for CodeQwen1.5-7B | [Qwen/CodeQwen1.5-7B](https://huggingface.co/Qwen/CodeQwen1.5-7B) |
| `Mixtral-8x22B-Instruct-v0.1.json` | EXL2 measurement file for Mixtral-8x22B-Instruct-v0.1 | [mistralai/Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1) |
| `Llama-3-Lumimaid-70B-v0.1.json` | EXL2 measurement file for Llama-3-Lumimaid-70B-v0.1 | [NeverSleep/Llama-3-Lumimaid-70B-v0.1](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1) |
| `Qwen2.5-72B-Instruct.json` | EXL2 measurement file for Qwen2.5-72B-Instruct | [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) |
| `Qwen2.5-Coder-32B-Instruct.json` | EXL2 measurement file for Qwen2.5-Coder-32B-Instruct | [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) |
| `QwQ-32B-Preview.json` | EXL2 measurement file for QwQ-32B-Preview | [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) |
| `Athene-V2-Chat.json` | EXL2 measurement file for Athene-V2-Chat | [Nexusflow/Athene-V2-Chat](https://huggingface.co/Nexusflow/Athene-V2-Chat) |
| `Llama-3.1-Nemotron-70B-Instruct.json` | EXL2 measurement file for Llama-3.1-Nemotron-70B-Instruct | [nvidia/Llama-3.1-Nemotron-70B-Instruct-HF](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF) |
| `Qwen2.5-32B-Instruct.json` | EXL2 measurement file for Qwen2.5-32B-Instruct | [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) |
| `Chronos-Platinum-72B.json` | EXL2 measurement file for Chronos-Platinum-72B | [ZeusLabs/Chronos-Platinum-72B](https://huggingface.co/ZeusLabs/Chronos-Platinum-72B) |
| `Evathene-v1.3.json` | EXL2 measurement file for Evathene-v1.3 | [sophosympatheia/Evathene-v1.3](https://huggingface.co/sophosympatheia/Evathene-v1.3) |
| `Llama-3.3-70B-Instruct.json` | EXL2 measurement file for Llama-3.3-70B-Instruct | [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) |
| `EVA-LLaMA-3.33-70B-v0.0.json` | EXL2 measurement file for EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0 | [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0) |
| `L3.3-70B-Euryale-v2.3.json` | EXL2 measurement file for Sao10K/L3.3-70B-Euryale-v2.3 | [Sao10K/L3.3-70B-Euryale-v2.3](https://huggingface.co/Sao10K/L3.3-70B-Euryale-v2.3) |
| `72B-Qwen2.5-Kunou-v1.json` | EXL2 measurement file for Sao10K/72B-Qwen2.5-Kunou-v1 | [Sao10K/72B-Qwen2.5-Kunou-v1](https://huggingface.co/Sao10K/72B-Qwen2.5-Kunou-v1) |
| `QVQ-72B-Preview.json` | EXL2 measurement file for Qwen/QVQ-72B-Preview | [Qwen/QVQ-72B-Preview](https://huggingface.co/Qwen/QVQ-72B-Preview) |
| `Anubis-70B-v1.json` | EXL2 measurement file for TheDrummer/Anubis-70B-v1 | [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1) |
| `Evayale-v1.0.json` | EXL2 measurement file for sophosympatheia/Evayale-v1.0 | [sophosympatheia/Evayale-v1.0](https://huggingface.co/sophosympatheia/Evayale-v1.0) |
| `L3.3-Damascus-R1.json` | EXL2 measurement file for Steelskull/L3.3-Damascus-R1 | [Steelskull/L3.3-Damascus-R1](https://huggingface.co/Steelskull/L3.3-Damascus-R1) |
| `QwQ-32B.json` | EXL2 measurement file for Qwen/QwQ-32B | [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) |
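As a usage sketch, a measurement file can be passed to the exllamav2 `convert.py` script via `-m` to skip the measurement pass when producing a new quant (paths and bitrate below are placeholders):

```
python convert.py -i /models/QwQ-32B -o /tmp/exl2_work -m QwQ-32B.json -cf /models/QwQ-32B_exl2_6.0bpw -b 6.0
```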
|
666tuna/HF3B_callme
|
666tuna
| 2025-03-06T03:20:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T03:20:47Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tona3738/results
|
tona3738
| 2025-03-06T03:20:15Z | 4 | 0 | null |
[
"safetensors",
"bert",
"generated_from_trainer",
"base_model:robzchhangte/MizBERT",
"base_model:finetune:robzchhangte/MizBERT",
"license:apache-2.0",
"region:us"
] | null | 2024-08-15T17:35:41Z |
---
license: apache-2.0
base_model: robzchhangte/MizBERT
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [robzchhangte/MizBERT](https://huggingface.co/robzchhangte/MizBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7248
- Accuracy: 0.5346
- F1: 0.5346
- Precision: 0.5346
- Recall: 0.5346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 15
- eval_batch_size: 15
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.6747 | 0.0585 | 10 | 1.2733 | 0.5 | 0.5 | 0.5 | 0.5 |
| 1.2979 | 0.1170 | 20 | 1.1629 | 0.5219 | 0.5219 | 0.5219 | 0.5219 |
| 1.0906 | 0.1754 | 30 | 1.1048 | 0.5234 | 0.5234 | 0.5234 | 0.5234 |
| 0.9134 | 0.2339 | 40 | 0.8426 | 0.5109 | 0.5109 | 0.5109 | 0.5109 |
| 0.7985 | 0.2924 | 50 | 0.7739 | 0.525 | 0.525 | 0.525 | 0.525 |
| 0.7278 | 0.3509 | 60 | 0.7949 | 0.4969 | 0.4969 | 0.4969 | 0.4969 |
| 0.7522 | 0.4094 | 70 | 0.7225 | 0.525 | 0.525 | 0.525 | 0.525 |
| 0.7134 | 0.4678 | 80 | 0.7187 | 0.5109 | 0.5109 | 0.5109 | 0.5109 |
| 0.6897 | 0.5263 | 90 | 0.7682 | 0.4781 | 0.4781 | 0.4781 | 0.4781 |
| 0.7369 | 0.5848 | 100 | 0.7019 | 0.5078 | 0.5078 | 0.5078 | 0.5078 |
| 0.6917 | 0.6433 | 110 | 0.6980 | 0.5109 | 0.5109 | 0.5109 | 0.5109 |
| 0.698 | 0.7018 | 120 | 0.7038 | 0.5297 | 0.5297 | 0.5297 | 0.5297 |
| 0.6974 | 0.7602 | 130 | 0.7039 | 0.5125 | 0.5125 | 0.5125 | 0.5125 |
| 0.7141 | 0.8187 | 140 | 0.6941 | 0.5047 | 0.5047 | 0.5047 | 0.5047 |
| 0.7127 | 0.8772 | 150 | 0.6937 | 0.5 | 0.5 | 0.5 | 0.5 |
| 0.7007 | 0.9357 | 160 | 0.7047 | 0.5266 | 0.5266 | 0.5266 | 0.5266 |
| 0.7483 | 0.9942 | 170 | 0.6975 | 0.4828 | 0.4828 | 0.4828 | 0.4828 |
| 0.7063 | 1.0526 | 180 | 0.6929 | 0.5266 | 0.5266 | 0.5266 | 0.5266 |
| 0.6848 | 1.1111 | 190 | 0.7107 | 0.4797 | 0.4797 | 0.4797 | 0.4797 |
| 0.7014 | 1.1696 | 200 | 0.6891 | 0.5422 | 0.5422 | 0.5422 | 0.5422 |
| 0.7113 | 1.2281 | 210 | 0.6950 | 0.5141 | 0.5141 | 0.5141 | 0.5141 |
| 0.6915 | 1.2865 | 220 | 0.6901 | 0.5391 | 0.5391 | 0.5391 | 0.5391 |
| 0.6834 | 1.3450 | 230 | 0.7117 | 0.5188 | 0.5188 | 0.5188 | 0.5188 |
| 0.7032 | 1.4035 | 240 | 0.7029 | 0.5031 | 0.5031 | 0.5031 | 0.5031 |
| 0.6962 | 1.4620 | 250 | 0.6952 | 0.5312 | 0.5312 | 0.5312 | 0.5312 |
| 0.7103 | 1.5205 | 260 | 0.7165 | 0.5297 | 0.5297 | 0.5297 | 0.5297 |
| 0.7405 | 1.5789 | 270 | 0.8608 | 0.475 | 0.4750 | 0.475 | 0.475 |
| 0.7633 | 1.6374 | 280 | 0.6994 | 0.5344 | 0.5344 | 0.5344 | 0.5344 |
| 0.7061 | 1.6959 | 290 | 0.6887 | 0.5531 | 0.5531 | 0.5531 | 0.5531 |
| 0.6975 | 1.7544 | 300 | 0.7105 | 0.475 | 0.4750 | 0.475 | 0.475 |
| 0.7098 | 1.8129 | 310 | 0.6959 | 0.5297 | 0.5297 | 0.5297 | 0.5297 |
| 0.7703 | 1.8713 | 320 | 0.6954 | 0.5281 | 0.5281 | 0.5281 | 0.5281 |
| 0.6948 | 1.9298 | 330 | 0.7116 | 0.475 | 0.4750 | 0.475 | 0.475 |
| 0.689 | 1.9883 | 340 | 0.7261 | 0.475 | 0.4750 | 0.475 | 0.475 |
| 0.7011 | 2.0468 | 350 | 0.7265 | 0.5234 | 0.5234 | 0.5234 | 0.5234 |
| 0.7026 | 2.1053 | 360 | 0.7217 | 0.4734 | 0.4734 | 0.4734 | 0.4734 |
| 0.6837 | 2.1637 | 370 | 0.7001 | 0.4984 | 0.4984 | 0.4984 | 0.4984 |
| 0.6579 | 2.2222 | 380 | 0.7106 | 0.525 | 0.525 | 0.525 | 0.525 |
| 0.6755 | 2.2807 | 390 | 0.7218 | 0.525 | 0.525 | 0.525 | 0.525 |
| 0.6739 | 2.3392 | 400 | 0.7054 | 0.5172 | 0.5172 | 0.5172 | 0.5172 |
| 0.6757 | 2.3977 | 410 | 0.7015 | 0.5406 | 0.5406 | 0.5406 | 0.5406 |
| 0.7135 | 2.4561 | 420 | 0.7396 | 0.4828 | 0.4828 | 0.4828 | 0.4828 |
| 0.6801 | 2.5146 | 430 | 0.7323 | 0.4906 | 0.4906 | 0.4906 | 0.4906 |
| 0.7349 | 2.5731 | 440 | 0.6939 | 0.5047 | 0.5047 | 0.5047 | 0.5047 |
| 0.6813 | 2.6316 | 450 | 0.6957 | 0.5234 | 0.5234 | 0.5234 | 0.5234 |
| 0.7054 | 2.6901 | 460 | 0.7156 | 0.5344 | 0.5344 | 0.5344 | 0.5344 |
| 0.7052 | 2.7485 | 470 | 0.7143 | 0.5437 | 0.5437 | 0.5437 | 0.5437 |
| 0.6915 | 2.8070 | 480 | 0.6947 | 0.5062 | 0.5062 | 0.5062 | 0.5062 |
| 0.679 | 2.8655 | 490 | 0.7109 | 0.5312 | 0.5312 | 0.5312 | 0.5312 |
| 0.6729 | 2.9240 | 500 | 0.7442 | 0.4938 | 0.4938 | 0.4938 | 0.4938 |
| 0.7035 | 2.9825 | 510 | 0.7041 | 0.5281 | 0.5281 | 0.5281 | 0.5281 |
| 0.7069 | 3.0409 | 520 | 0.7023 | 0.4766 | 0.4766 | 0.4766 | 0.4766 |
| 0.7089 | 3.0994 | 530 | 0.6936 | 0.5359 | 0.5359 | 0.5359 | 0.5359 |
| 0.6675 | 3.1579 | 540 | 0.6931 | 0.5188 | 0.5188 | 0.5188 | 0.5188 |
| 0.6202 | 3.2164 | 550 | 0.8091 | 0.4703 | 0.4703 | 0.4703 | 0.4703 |
| 0.6183 | 3.2749 | 560 | 0.7316 | 0.5406 | 0.5406 | 0.5406 | 0.5406 |
| 0.5781 | 3.3333 | 570 | 0.7620 | 0.5437 | 0.5437 | 0.5437 | 0.5437 |
| 0.6383 | 3.3918 | 580 | 0.7552 | 0.5219 | 0.5219 | 0.5219 | 0.5219 |
| 0.628 | 3.4503 | 590 | 0.7266 | 0.5437 | 0.5437 | 0.5437 | 0.5437 |
| 0.6198 | 3.5088 | 600 | 0.7217 | 0.5672 | 0.5672 | 0.5672 | 0.5672 |
| 0.6572 | 3.5673 | 610 | 0.7962 | 0.5047 | 0.5047 | 0.5047 | 0.5047 |
| 0.6119 | 3.6257 | 620 | 0.7258 | 0.5563 | 0.5563 | 0.5563 | 0.5563 |
| 0.6651 | 3.6842 | 630 | 0.7445 | 0.55 | 0.55 | 0.55 | 0.55 |
| 0.5399 | 3.7427 | 640 | 0.8115 | 0.5062 | 0.5062 | 0.5062 | 0.5062 |
| 0.6291 | 3.8012 | 650 | 0.8045 | 0.5312 | 0.5312 | 0.5312 | 0.5312 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Tokenizers 0.19.1
|
ontocord/wide_3b_sft_stage1.1-ss1-no_redteam_skg_poem.no_issue
|
ontocord
| 2025-03-06T03:19:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-06T03:16:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jssky/6cab1b5d-6330-48b7-a862-6acf6823a7c7
|
jssky
| 2025-03-06T03:19:32Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Capybara-7B-V1.9",
"base_model:adapter:NousResearch/Nous-Capybara-7B-V1.9",
"license:mit",
"region:us"
] | null | 2025-03-06T01:33:46Z |
---
library_name: peft
license: mit
base_model: NousResearch/Nous-Capybara-7B-V1.9
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6cab1b5d-6330-48b7-a862-6acf6823a7c7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.6.0`
```yaml
adapter: lora
base_model: NousResearch/Nous-Capybara-7B-V1.9
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8705748c33de40f1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8705748c33de40f1_train_data.json
type:
field_instruction: text
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 200
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: jssky/6cab1b5d-6330-48b7-a862-6acf6823a7c7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.2
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 1000
micro_batch_size: 8
mlflow_experiment_name: /tmp/8705748c33de40f1_train_data.json
model_type: AutoModelForCausalLM
modules_to_save: lm_head
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 200
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: offline
wandb_name: 6ea17b8e-18a1-49d3-aa27-fab88b496936
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6ea17b8e-18a1-49d3-aa27-fab88b496936
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6cab1b5d-6330-48b7-a862-6acf6823a7c7
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.2054 | 200 | 0.0026 |
| 0.0 | 0.4108 | 400 | 0.0025 |
| 0.0 | 0.6162 | 600 | 0.0001 |
| 0.0046 | 0.8216 | 800 | 0.0015 |
| 0.0 | 1.0270 | 1000 | 0.0008 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
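A minimal sketch of loading this adapter on top of the base model with PEFT (the dtype and device settings are assumptions):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Nous-Capybara-7B-V1.9", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "jssky/6cab1b5d-6330-48b7-a862-6acf6823a7c7")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Nous-Capybara-7B-V1.9")
```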
|
melpiece/HF3B_callme
|
melpiece
| 2025-03-06T03:18:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T03:17:57Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jonjew/AlysonHannigan
|
Jonjew
| 2025-03-06T03:17:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-06T03:17:09Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: 4LYS0N
output:
url: images/ComfyUI_00016_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: 4LYS0N
license: unknown
---
# Alyson Hannigan - Actress
<Gallery />
## Model description
FROM https://civitai.com/models/1180460/alyson-hannigan-actress?modelVersionId=1328416
Trigger 4LYS0N
Strength 1.2
Alyson Hannigan Denisof (@alysonhannigan)
Alyson Hannigan, born on March 24, 1974, in Washington, D.C., is an American actress and television presenter best known for her iconic roles in Buffy the Vampire Slayer (as Willow Rosenberg) and How I Met Your Mother (as Lily Aldrin). She also gained fame in the American Pie film series as Michelle Flaherty. Alyson began her acting career in commercials as a child and transitioned to television and film roles in her teens, with her first major film appearance in My Stepmother Is an Alien (1988).
Standing 5'4" (164 cm) tall and weighing approximately 132 lbs (60 kg), Alyson has hazel eyes, brown hair, and an approachable charm that enhances her performances. She is of Irish and Jewish descent. Alyson graduated from North Hollywood High School and later attended California State University, Northridge.
In addition to acting, she has worked as a television host, presenting the show Penn & Teller: Fool Us. She married her Buffy the Vampire Slayer co-star Alexis Denisof in 2003, and the couple has two daughters, Satyana and Keeva. Alyson remains a beloved figure in the entertainment industry for her comedic timing and endearing portrayals.
Recommended strength is 1.0 to 1.2 (1.2 tends to give better results).
## Trigger words
You should use `4LYS0N` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/AlysonHannigan/tree/main) them in the Files & versions tab.
|
lqishuo/ppo-LunarLander-v2
|
lqishuo
| 2025-03-06T03:17:40Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-03-06T03:17:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.90 +/- 19.23
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption, following the usual `<algo>-<env>.zip` convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it; the filename is assumed
checkpoint = load_from_hub("lqishuo/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Nerva1228/niuma1
|
Nerva1228
| 2025-03-06T03:16:34Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-06T03:16:33Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: niuma1
---
# Niuma1
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `niuma1` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/niuma1', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Abheben/bert-finetuned-ner
|
Abheben
| 2025-03-06T03:14:27Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-03-06T03:06:52Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9330246403175129
- name: Recall
type: recall
value: 0.9495119488387749
- name: F1
type: f1
value: 0.94119609642172
- name: Accuracy
type: accuracy
value: 0.9861511744275033
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0628
- Precision: 0.9330
- Recall: 0.9495
- F1: 0.9412
- Accuracy: 0.9862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0732 | 1.0 | 1756 | 0.0699 | 0.9086 | 0.9355 | 0.9219 | 0.9818 |
| 0.0353 | 2.0 | 3512 | 0.0654 | 0.9363 | 0.9492 | 0.9427 | 0.9861 |
| 0.0222 | 3.0 | 5268 | 0.0628 | 0.9330 | 0.9495 | 0.9412 | 0.9862 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.3.1+cu121
- Datasets 3.3.2
- Tokenizers 0.21.0
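A minimal usage sketch with the 🤗 transformers pipeline (the aggregation strategy is a typical choice, not from this card):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Abheben/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("My name is Wolfgang and I live in Berlin."))
```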
|
marcelovidigal/ModernBERT-base-2-contract-sections-classification-v4-10-max
|
marcelovidigal
| 2025-03-06T03:14:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-03-04T19:42:17Z |
---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
model-index:
- name: ModernBERT-base-2-contract-sections-classification-v4-10-max
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/mvgdr/classificacao-secoes-contratos-v4-modernbert-base/runs/20cn4u4r)
# ModernBERT-base-2-contract-sections-classification-v4-10-max
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4801
- Accuracy Evaluate: 0.917
- Precision Evaluate: 0.9221
- Recall Evaluate: 0.9209
- F1 Evaluate: 0.9210
- Accuracy Sklearn: 0.917
- Precision Sklearn: 0.9173
- Recall Sklearn: 0.917
- F1 Sklearn: 0.9166
- Acuracia Rotulo Objeto: 0.9649
- Acuracia Rotulo Obrigacoes: 0.8838
- Acuracia Rotulo Valor: 0.8166
- Acuracia Rotulo Vigencia: 0.9816
- Acuracia Rotulo Rescisao: 0.9474
- Acuracia Rotulo Foro: 0.9385
- Acuracia Rotulo Reajuste: 0.9039
- Acuracia Rotulo Fiscalizacao: 0.8423
- Acuracia Rotulo Publicacao: 0.9951
- Acuracia Rotulo Pagamento: 0.8913
- Acuracia Rotulo Casos Omissos: 0.9015
- Acuracia Rotulo Sancoes: 0.9266
- Acuracia Rotulo Dotacao Orcamentaria: 0.9780
## Model description
More information needed
## Intended uses & limitations
More information needed
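A minimal usage sketch (not from the original card; it assumes the standard `transformers` text-classification pipeline, and the input clause is hypothetical):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="marcelovidigal/ModernBERT-base-2-contract-sections-classification-v4-10-max",
)
# Hypothetical clause; the per-label accuracies above suggest Portuguese contract text
print(classifier("CLÁUSULA PRIMEIRA - DO OBJETO: o presente contrato tem por objeto ..."))
```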
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Evaluate | Precision Evaluate | Recall Evaluate | F1 Evaluate | Accuracy Sklearn | Precision Sklearn | Recall Sklearn | F1 Sklearn | Acuracia Rotulo Objeto | Acuracia Rotulo Obrigacoes | Acuracia Rotulo Valor | Acuracia Rotulo Vigencia | Acuracia Rotulo Rescisao | Acuracia Rotulo Foro | Acuracia Rotulo Reajuste | Acuracia Rotulo Fiscalizacao | Acuracia Rotulo Publicacao | Acuracia Rotulo Pagamento | Acuracia Rotulo Casos Omissos | Acuracia Rotulo Sancoes | Acuracia Rotulo Dotacao Orcamentaria |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|:------------------:|:---------------:|:-----------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------------:|:--------------------------:|:---------------------:|:------------------------:|:------------------------:|:--------------------:|:------------------------:|:----------------------------:|:--------------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------------------------:|
| 2.3971 | 1.0 | 2000 | 0.8286 | 0.7678 | 0.8292 | 0.7831 | 0.7914 | 0.7678 | 0.8077 | 0.7678 | 0.7695 | 0.9504 | 0.6734 | 0.5244 | 0.6115 | 0.8975 | 0.8808 | 0.6512 | 0.7382 | 0.9113 | 0.7754 | 0.8227 | 0.7982 | 0.9451 |
| 1.3917 | 2.0 | 4000 | 0.6639 | 0.8558 | 0.8777 | 0.8708 | 0.8702 | 0.8558 | 0.8673 | 0.8558 | 0.8557 | 0.9587 | 0.7357 | 0.6619 | 0.8976 | 0.9030 | 0.9231 | 0.8719 | 0.7918 | 0.9655 | 0.8587 | 0.8867 | 0.8991 | 0.9670 |
| 0.887 | 3.0 | 6000 | 0.5830 | 0.874 | 0.8789 | 0.8885 | 0.8804 | 0.874 | 0.8812 | 0.874 | 0.8743 | 0.9442 | 0.7761 | 0.7421 | 0.9081 | 0.8615 | 0.9346 | 0.9004 | 0.8265 | 0.9852 | 0.8841 | 0.9015 | 0.9083 | 0.9780 |
| 0.9516 | 4.0 | 8000 | 0.5632 | 0.8885 | 0.8987 | 0.8992 | 0.8979 | 0.8885 | 0.8899 | 0.8885 | 0.8881 | 0.9194 | 0.8081 | 0.7736 | 0.9554 | 0.9474 | 0.9385 | 0.8719 | 0.8297 | 0.9803 | 0.8877 | 0.8818 | 0.9174 | 0.9780 |
| 0.7614 | 5.0 | 10000 | 0.5290 | 0.8998 | 0.9083 | 0.9102 | 0.9084 | 0.8998 | 0.9021 | 0.8998 | 0.8998 | 0.9628 | 0.8064 | 0.8052 | 0.9370 | 0.9418 | 0.9846 | 0.8897 | 0.8328 | 0.9951 | 0.8804 | 0.9015 | 0.9174 | 0.9780 |
| 0.6291 | 6.0 | 12000 | 0.5590 | 0.8978 | 0.9029 | 0.9089 | 0.9044 | 0.8978 | 0.9009 | 0.8978 | 0.8975 | 0.9421 | 0.7946 | 0.7880 | 0.9764 | 0.9391 | 0.9385 | 0.9075 | 0.8454 | 0.9951 | 0.8913 | 0.9064 | 0.9083 | 0.9835 |
| 0.4869 | 7.0 | 14000 | 0.4875 | 0.9103 | 0.9140 | 0.9156 | 0.9143 | 0.9103 | 0.9102 | 0.9103 | 0.9096 | 0.9587 | 0.8737 | 0.7966 | 0.9790 | 0.9446 | 0.9346 | 0.9039 | 0.8139 | 0.9951 | 0.8913 | 0.9064 | 0.9266 | 0.9780 |
| 0.5691 | 8.0 | 16000 | 0.4929 | 0.9083 | 0.9119 | 0.9162 | 0.9134 | 0.9083 | 0.9093 | 0.9083 | 0.9079 | 0.9607 | 0.8232 | 0.8195 | 0.9843 | 0.9446 | 0.9385 | 0.8932 | 0.8580 | 0.9951 | 0.8913 | 0.9064 | 0.9174 | 0.9780 |
| 0.3831 | 9.0 | 18000 | 0.4892 | 0.9153 | 0.9184 | 0.9198 | 0.9186 | 0.9153 | 0.9157 | 0.9153 | 0.9149 | 0.9649 | 0.8737 | 0.8138 | 0.9843 | 0.9446 | 0.9385 | 0.8968 | 0.8486 | 0.9951 | 0.8913 | 0.9015 | 0.9266 | 0.9780 |
| 0.3 | 10.0 | 20000 | 0.4801 | 0.917 | 0.9221 | 0.9209 | 0.9210 | 0.917 | 0.9173 | 0.917 | 0.9166 | 0.9649 | 0.8838 | 0.8166 | 0.9816 | 0.9474 | 0.9385 | 0.9039 | 0.8423 | 0.9951 | 0.8913 | 0.9015 | 0.9266 | 0.9780 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.0
- Tokenizers 0.21.0
|
texanrangee/d0b020ae-3e28-45ee-9bf5-722674d0619c
|
texanrangee
| 2025-03-06T03:12:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T02:47:45Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ontocord/wide_3b-merge_test
|
ontocord
| 2025-03-06T03:12:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-06T03:03:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
texanrangee/9c2d671b-1514-4a41-8981-d2e05f2411e1
|
texanrangee
| 2025-03-06T03:10:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T00:42:17Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yenchik/mlx-gemma-2-2b-it-math
|
yenchik
| 2025-03-06T03:09:26Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"mlx",
"base_model:google/gemma-2-2b-it",
"base_model:quantized:google/gemma-2-2b-it",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2025-03-02T12:54:45Z |
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- conversational
- mlx
base_model: google/gemma-2-2b-it
---
# yenchik/mlx-gemma-2-2b-it-math
The Model [yenchik/mlx-gemma-2-2b-it-math](https://huggingface.co/yenchik/mlx-gemma-2-2b-it-math) was
converted to MLX format from [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it)
using mlx-lm version **0.21.5**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the 4-bit MLX weights and tokenizer from the Hub
model, tokenizer = load("yenchik/mlx-gemma-2-2b-it-math")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
tamtamx1332/AsenaMixTest
|
tamtamx1332
| 2025-03-06T03:09:25Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-06T03:08:09Z |
---
license: apache-2.0
---
|
whatsupmate/mizrahi3000_LoRA
|
whatsupmate
| 2025-03-06T03:07:06Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-03-06T03:05:30Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of ytzhakov person
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - whatsupmate/mizrahi3000_LoRA
<Gallery />
## Model description
These are whatsupmate/mizrahi3000_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was not enabled.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of ytzhakov person` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/whatsupmate/mizrahi3000_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
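Until the snippet above is filled in, a minimal loading sketch (not from the original card; it assumes the default DreamBooth LoRA weight file name in this repository):
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base pipeline in half precision
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Attach the DreamBooth LoRA weights; include the trigger phrase in the prompt
pipeline.load_lora_weights("whatsupmate/mizrahi3000_LoRA")
image = pipeline("a photo of ytzhakov person").images[0]
```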
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
dskong07/charger-classif-model
|
dskong07
| 2025-03-06T03:02:22Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-03-06T02:48:07Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: charger-classif-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# charger-classif-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2678
- Accuracy: 0.9231
## Model description
More information needed
## Intended uses & limitations
More information needed
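A minimal usage sketch (not from the original card; it assumes the standard `transformers` image-classification pipeline):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="dskong07/charger-classif-model")
print(classifier("charger.jpg"))  # hypothetical local image path or URL
```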
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4057 | 0.0769 | 1 | 0.5508 | 0.6923 |
| 0.5194 | 0.1538 | 2 | 0.5735 | 0.6923 |
| 0.4141 | 0.2308 | 3 | 0.5007 | 0.7692 |
| 0.5442 | 0.3077 | 4 | 0.5160 | 0.8462 |
| 0.43 | 0.3846 | 5 | 0.5931 | 0.7692 |
| 0.4126 | 0.4615 | 6 | 0.5228 | 0.7692 |
| 0.4151 | 0.5385 | 7 | 0.5552 | 0.7692 |
| 0.3753 | 0.6154 | 8 | 0.5825 | 0.6154 |
| 0.3468 | 0.6923 | 9 | 0.5637 | 0.6923 |
| 0.3467 | 0.7692 | 10 | 0.5148 | 0.6923 |
| 0.5188 | 0.8462 | 11 | 0.4735 | 0.7692 |
| 0.4342 | 0.9231 | 12 | 0.5058 | 0.7692 |
| 0.3888 | 1.0 | 13 | 0.5176 | 0.6923 |
| 0.3977 | 1.0769 | 14 | 0.4865 | 0.7692 |
| 0.1799 | 1.1538 | 15 | 0.5299 | 0.6923 |
| 0.4628 | 1.2308 | 16 | 0.5614 | 0.6923 |
| 0.8787 | 1.3077 | 17 | 0.5826 | 0.6923 |
| 0.3396 | 1.3846 | 18 | 0.5337 | 0.7692 |
| 0.2144 | 1.4615 | 19 | 0.5531 | 0.6923 |
| 0.242 | 1.5385 | 20 | 0.5317 | 0.6923 |
| 1.1866 | 1.6154 | 21 | 0.5042 | 0.6923 |
| 0.2689 | 1.6923 | 22 | 0.4067 | 0.8462 |
| 0.3953 | 1.7692 | 23 | 0.4513 | 0.8462 |
| 0.1978 | 1.8462 | 24 | 0.5103 | 0.6923 |
| 0.3293 | 1.9231 | 25 | 0.4829 | 0.6923 |
| 0.3324 | 2.0 | 26 | 0.4915 | 0.8462 |
| 0.2096 | 2.0769 | 27 | 0.5136 | 0.8462 |
| 0.4142 | 2.1538 | 28 | 0.4490 | 0.7692 |
| 0.4267 | 2.2308 | 29 | 0.4697 | 0.7692 |
| 0.1871 | 2.3077 | 30 | 0.4744 | 0.7692 |
| 0.3145 | 2.3846 | 31 | 0.5596 | 0.6923 |
| 0.3417 | 2.4615 | 32 | 0.4589 | 0.6923 |
| 0.1548 | 2.5385 | 33 | 0.5245 | 0.6923 |
| 0.3131 | 2.6154 | 34 | 0.4507 | 0.6923 |
| 0.1974 | 2.6923 | 35 | 0.4068 | 0.8462 |
| 0.3148 | 2.7692 | 36 | 0.5019 | 0.6923 |
| 0.5036 | 2.8462 | 37 | 0.4761 | 0.6923 |
| 0.2178 | 2.9231 | 38 | 0.4132 | 0.9231 |
| 0.4536 | 3.0 | 39 | 0.4745 | 0.7692 |
| 0.3118 | 3.0769 | 40 | 0.4869 | 0.7692 |
| 0.3465 | 3.1538 | 41 | 0.4473 | 0.7692 |
| 0.096 | 3.2308 | 42 | 0.4376 | 0.8462 |
| 0.1726 | 3.3077 | 43 | 0.5971 | 0.7692 |
| 0.1685 | 3.3846 | 44 | 0.4768 | 0.7692 |
| 0.2046 | 3.4615 | 45 | 0.3595 | 0.8462 |
| 0.1297 | 3.5385 | 46 | 0.4701 | 0.7692 |
| 0.4597 | 3.6154 | 47 | 0.4054 | 0.7692 |
| 0.3474 | 3.6923 | 48 | 0.3927 | 0.8462 |
| 0.4476 | 3.7692 | 49 | 0.5063 | 0.8462 |
| 0.1062 | 3.8462 | 50 | 0.4741 | 0.7692 |
| 0.5484 | 3.9231 | 51 | 0.4950 | 0.6923 |
| 0.0945 | 4.0 | 52 | 0.4647 | 0.7692 |
| 0.1053 | 4.0769 | 53 | 0.3743 | 0.8462 |
| 0.4122 | 4.1538 | 54 | 0.4350 | 0.8462 |
| 0.2825 | 4.2308 | 55 | 0.4246 | 0.8462 |
| 0.2912 | 4.3077 | 56 | 0.5250 | 0.6923 |
| 0.3193 | 4.3846 | 57 | 0.3639 | 0.8462 |
| 0.066 | 4.4615 | 58 | 0.3574 | 0.9231 |
| 0.0888 | 4.5385 | 59 | 0.4897 | 0.6923 |
| 0.1046 | 4.6154 | 60 | 0.3032 | 0.9231 |
| 0.2573 | 4.6923 | 61 | 0.5662 | 0.6154 |
| 0.368 | 4.7692 | 62 | 0.3699 | 0.8462 |
| 0.1484 | 4.8462 | 63 | 0.3517 | 0.8462 |
| 0.1444 | 4.9231 | 64 | 0.2988 | 0.9231 |
| 0.1492 | 5.0 | 65 | 0.3523 | 0.8462 |
| 0.112 | 5.0769 | 66 | 0.4245 | 0.8462 |
| 0.0711 | 5.1538 | 67 | 0.4451 | 0.6923 |
| 0.2455 | 5.2308 | 68 | 0.4774 | 0.7692 |
| 0.3981 | 5.3077 | 69 | 0.5084 | 0.7692 |
| 0.1682 | 5.3846 | 70 | 0.4053 | 0.8462 |
| 0.2809 | 5.4615 | 71 | 0.4574 | 0.6923 |
| 0.1929 | 5.5385 | 72 | 0.3242 | 0.7692 |
| 0.161 | 5.6154 | 73 | 0.3854 | 0.7692 |
| 0.1475 | 5.6923 | 74 | 0.3935 | 0.7692 |
| 0.1058 | 5.7692 | 75 | 0.5751 | 0.6923 |
| 0.1103 | 5.8462 | 76 | 0.3874 | 0.8462 |
| 0.1057 | 5.9231 | 77 | 0.3984 | 0.7692 |
| 0.1593 | 6.0 | 78 | 0.3299 | 0.8462 |
| 0.1154 | 6.0769 | 79 | 0.4778 | 0.7692 |
| 0.3131 | 6.1538 | 80 | 0.4863 | 0.7692 |
| 0.0791 | 6.2308 | 81 | 0.4897 | 0.7692 |
| 0.0635 | 6.3077 | 82 | 0.5831 | 0.7692 |
| 0.0704 | 6.3846 | 83 | 0.4384 | 0.8462 |
| 0.0597 | 6.4615 | 84 | 0.5519 | 0.7692 |
| 0.1117 | 6.5385 | 85 | 0.4525 | 0.7692 |
| 0.1542 | 6.6154 | 86 | 0.5354 | 0.8462 |
| 0.5737 | 6.6923 | 87 | 0.5034 | 0.7692 |
| 0.4216 | 6.7692 | 88 | 0.4514 | 0.7692 |
| 0.3276 | 6.8462 | 89 | 0.5688 | 0.7692 |
| 0.119 | 6.9231 | 90 | 0.3433 | 0.9231 |
| 0.1519 | 7.0 | 91 | 0.4454 | 0.7692 |
| 0.1155 | 7.0769 | 92 | 0.3323 | 0.7692 |
| 0.1264 | 7.1538 | 93 | 0.4030 | 0.6923 |
| 0.0585 | 7.2308 | 94 | 0.3404 | 0.8462 |
| 0.1404 | 7.3077 | 95 | 0.3507 | 0.8462 |
| 0.0417 | 7.3846 | 96 | 0.4860 | 0.7692 |
| 0.0873 | 7.4615 | 97 | 0.4896 | 0.8462 |
| 0.0801 | 7.5385 | 98 | 0.4383 | 0.7692 |
| 0.2163 | 7.6154 | 99 | 0.3764 | 0.8462 |
| 0.1823 | 7.6923 | 100 | 0.4258 | 0.8462 |
| 0.1832 | 7.7692 | 101 | 0.2890 | 0.8462 |
| 0.0879 | 7.8462 | 102 | 0.2909 | 0.8462 |
| 0.2345 | 7.9231 | 103 | 0.3617 | 0.8462 |
| 0.1096 | 8.0 | 104 | 0.2678 | 0.9231 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cpu
- Datasets 3.2.0
- Tokenizers 0.21.0
|
ben832/mflux
|
ben832
| 2025-03-06T03:00:27Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] |
text-to-image
| 2025-03-05T21:40:09Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: flux
output:
url: images/download.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: mit
---
# mflux
<Gallery />
## Download model
Weights for this model are available in Safetensors and PyTorch formats.
[Download](/ben832/mflux/tree/main) them in the Files & versions tab.
|
ClarenceDan/bd3d3649-4d88-4050-917a-8dfa58d476c0
|
ClarenceDan
| 2025-03-06T02:59:09Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-03-06T02:48:36Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bd3d3649-4d88-4050-917a-8dfa58d476c0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 64ada857f8032e58_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/64ada857f8032e58_train_data.json
type:
field_input: description
field_instruction: input persona
field_output: synthesized text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/bd3d3649-4d88-4050-917a-8dfa58d476c0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/64ada857f8032e58_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d48c1d93-6c93-4c81-b9c1-af40841a4fb7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d48c1d93-6c93-4c81-b9c1-af40841a4fb7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# bd3d3649-4d88-4050-917a-8dfa58d476c0
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
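A minimal loading sketch for this LoRA adapter (standard PEFT API; note the evaluation loss reported above is NaN, so generations may be degenerate):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repository's adapter weights
base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-1.7B-Instruct")
model = PeftModel.from_pretrained(base, "ClarenceDan/bd3d3649-4d88-4050-917a-8dfa58d476c0")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-1.7B-Instruct")
```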
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 0.0 | 0.0005 | 3 | nan |
| 0.0 | 0.0010 | 6 | nan |
| 0.0 | 0.0015 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Alphatao/test_5
|
Alphatao
| 2025-03-06T02:57:00Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B",
"base_model:adapter:Qwen/Qwen2-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-03-06T02:54:55Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: test_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: "Qwen/Qwen2-0.5B"
model_type: "AutoModelForCausalLM"
tokenizer_type: "AutoTokenizer"
load_in_8bit: false
load_in_4bit: true
strict: false
chat_template: "llama3"
datasets:
- path: "/workspace/input_data/train_data.json"
format: "custom"
type:
system_prompt: ""
system_format: "{system}"
field_instruction: "prompt"
field_output: "question"
no_input_format: "{instruction}"
format: "{instruction}"
ds_type: "json"
data_files:
- "train_data.json"
dataset_prepared_path: null
val_set_size: 0.04
output_dir: "miner_id_24"
sequence_len: 1024
sample_packing: false
pad_to_sequence_len: true
trust_remote_code: true
adapter: "lora"
lora_model_dir: null
lora_r: 64
lora_alpha: 128
lora_dropout: 0.3
lora_target_linear: true
lora_fan_in_fan_out: null
gradient_accumulation_steps: 6
micro_batch_size: 4
optimizer: "adamw_bnb_8bit"
lr_scheduler: "cosine"
learning_rate: 0.0002
num_epochs: 3
max_steps: 2
train_on_inputs: false
group_by_length: false
bf16: true
fp16: null
tf32: true
max_grad_norm: 1.0
gradient_checkpointing: true
early_stopping_patience: 4
save_steps: 100
eval_steps: 100
resume_from_checkpoint: null
local_rank: null
logging_steps: 1
xformers_attention: null
flash_attention: true
s2_attention: null
load_best_model_at_end: true
wandb_project: "Gradients-On-Demand"
wandb_entity: null
wandb_mode: "online"
wandb_run: "your_name"
wandb_runid: "c29f8be0-1d6a-40dd-83f1-f1d58697725a"
hub_model_id: "Alphatao/test_5"
hub_repo: null
hub_strategy: "end"
hub_token: null
warmup_steps: 10
eval_table_size: null
eval_max_new_tokens: 128
debug: null
deepspeed: null
weight_decay: 0.0
fsdp: null
fsdp_config: null
wandb_name: "c29f8be0-1d6a-40dd-83f1-f1d58697725a"
lora_target_modules: ["q_proj", "k_proj", "v_proj"]
mlflow_experiment_name: "/tmp/train_data.json"
```
</details><br>
# test_5
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4452
## Model description
More information needed
## Intended uses & limitations
More information needed
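A minimal loading sketch (standard PEFT API; per the config above, the adapter was trained for only 2 steps):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repository's adapter weights
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B")
model = PeftModel.from_pretrained(base, "Alphatao/test_5")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
```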
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.26 | 0.0006 | 1 | 0.4452 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
AImused/omeg31
|
AImused
| 2025-03-06T02:56:17Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-03-06T02:49:51Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
KingEmpire/Leuven_7
|
KingEmpire
| 2025-03-06T02:55:19Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-03-06T02:37:23Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
triplee/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit_2025-03-05
|
triplee
| 2025-03-06T02:55:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-06T02:52:58Z |
---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** triplee
- **License:** apache-2.0
- **Finetuned from model:** unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
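A minimal loading sketch (not from the original card; standard `transformers` API, and `device_map="auto"` assumes `accelerate` is installed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "triplee/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit_2025-03-05"
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)
```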
|
nomnoos37/250305-Mistral-Nemo-ggls-v1.3.9-1-epoch
|
nomnoos37
| 2025-03-06T02:55:02Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit",
"base_model:quantized:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-06T02:50:57Z |
---
base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nomnoos37
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
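A minimal loading sketch with `llama-cpp-python` (the `.gguf` filename pattern below is an assumption; substitute the actual quant file shipped in this repo):
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="nomnoos37/250305-Mistral-Nemo-ggls-v1.3.9-1-epoch",
    filename="*.gguf",  # assumption: glob matching the quantized file in the repo
)
print(llm("Hello", max_tokens=32))
```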
|
KingEmpire/Leuven_3
|
KingEmpire
| 2025-03-06T02:54:46Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-03-06T02:37:22Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
KingEmpire/Leuven_6
|
KingEmpire
| 2025-03-06T02:54:41Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-03-06T02:37:23Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
KingEmpire/Leuven_5
|
KingEmpire
| 2025-03-06T02:54:28Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-03-06T02:37:23Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
KingEmpire/Leuven_1
|
KingEmpire
| 2025-03-06T02:54:25Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-03-06T02:37:21Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Alphatao/test3
|
Alphatao
| 2025-03-06T02:53:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B",
"base_model:adapter:Qwen/Qwen2-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-03-06T01:54:20Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: test3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: "Qwen/Qwen2-0.5B"
model_type: "AutoModelForCausalLM"
tokenizer_type: "AutoTokenizer"
load_in_8bit: false
load_in_4bit: true
strict: false
chat_template: "llama3"
datasets:
- path: "/workspace/input_data/train_data.json"
format: "custom"
type:
system_prompt: ""
system_format: "{system}"
field_instruction: "prompt"
field_output: "question"
no_input_format: "{instruction}"
format: "{instruction}"
ds_type: "json"
data_files:
- "train_data.json"
dataset_prepared_path: null
val_set_size: 0.04
output_dir: "miner_id_24"
sequence_len: 1024
sample_packing: false
pad_to_sequence_len: true
trust_remote_code: true
adapter: "lora"
lora_model_dir: null
lora_r: 64
lora_alpha: 128
lora_dropout: 0.3
lora_target_linear: true
lora_fan_in_fan_out: null
gradient_accumulation_steps: 6
micro_batch_size: 4
optimizer: "adamw_bnb_8bit"
lr_scheduler: "cosine"
learning_rate: 0.0002
num_epochs: 3
max_steps: 2
train_on_inputs: false
group_by_length: false
bf16: true
fp16: null
tf32: true
max_grad_norm: 1.0
gradient_checkpointing: true
early_stopping_patience: 4
save_steps: 100
eval_steps: 100
resume_from_checkpoint: null
local_rank: null
logging_steps: 1
xformers_attention: null
flash_attention: true
s2_attention: null
load_best_model_at_end: true
wandb_project: "Gradients-On-Demand"
wandb_entity: null
wandb_mode: "online"
wandb_run: "your_name"
wandb_runid: "c29f8be0-1d6a-40dd-83f1-f1d58697725a"
hub_model_id: "Alphatao/test3"
hub_repo: null
hub_strategy: "end"
hub_token: null
warmup_steps: 10
eval_table_size: null
eval_max_new_tokens: 128
debug: null
deepspeed: null
weight_decay: 0.0
fsdp: null
fsdp_config: null
wandb_name: "c29f8be0-1d6a-40dd-83f1-f1d58697725a"
lora_target_modules: ["q_proj", "k_proj", "v_proj"]
mlflow_experiment_name: "/tmp/train_data.json"
```
</details><br>
# test3
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4452
## Model description
More information needed
## Intended uses & limitations
More information needed
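As with `Alphatao/test_5` above, a minimal loading and generation sketch (standard PEFT API):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repository's adapter weights
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B")
model = PeftModel.from_pretrained(base, "Alphatao/test3")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```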
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.26 | 0.0006 | 1 | 0.4452 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
bonamt11/Llama-3.2-1B-Instruct-bnb-4bit-Classification-model
|
bonamt11
| 2025-03-06T02:52:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-06T02:50:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
texanrangee/dd7e0664-4477-421e-ae3e-507f51ed6fb8
|
texanrangee
| 2025-03-06T02:51:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-05T23:21:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kenken6696/Llama-3.2-1B_3_mix_position_famous_unrecognized
|
kenken6696
| 2025-03-06T02:47:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-06T02:45:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
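In the absence of model-specific instructions, a generic sketch inferred from this repo's tags (`transformers`, `llama`, `text-generation`) might look like the following; this is an assumption, not an official usage example for this model:
```python
# Hedged sketch: load this checkpoint as a causal LM, as the repo tags suggest.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kenken6696/Llama-3.2-1B_3_mix_position_famous_unrecognized"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```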
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Luongdzung/bloomVN-0.5B-ppo-sft-order1-mat-phy-che-bio-lit-his-geo-olora
|
Luongdzung
| 2025-03-06T02:46:51Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Luongdzung/bloomVN-0.5B-ppo-sft-order1-mat-phy-che-bio-lit-his-olora-ALL-WEIGHT",
"base_model:adapter:Luongdzung/bloomVN-0.5B-ppo-sft-order1-mat-phy-che-bio-lit-his-olora-ALL-WEIGHT",
"region:us"
] | null | 2025-03-06T02:46:48Z |
---
library_name: peft
base_model: Luongdzung/bloomVN-0.5B-ppo-sft-order1-mat-phy-che-bio-lit-his-olora-ALL-WEIGHT
tags:
- generated_from_trainer
model-index:
- name: bloomVN-0.5B-ppo-sft-order1-mat-phy-che-bio-lit-his-geo-olora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloomVN-0.5B-ppo-sft-order1-mat-phy-che-bio-lit-his-geo-olora
This model is a fine-tuned version of [Luongdzung/bloomVN-0.5B-ppo-sft-order1-mat-phy-che-bio-lit-his-olora-ALL-WEIGHT](https://huggingface.co/Luongdzung/bloomVN-0.5B-ppo-sft-order1-mat-phy-che-bio-lit-his-olora-ALL-WEIGHT) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
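Since this repository ships a PEFT adapter on top of the base model named in the card metadata, loading it might look like the following sketch (an assumption based on the metadata, not an official example):
```python
# Hedged sketch: attach this PEFT adapter to its base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Luongdzung/bloomVN-0.5B-ppo-sft-order1-mat-phy-che-bio-lit-his-olora-ALL-WEIGHT"
adapter_id = "Luongdzung/bloomVN-0.5B-ppo-sft-order1-mat-phy-che-bio-lit-his-geo-olora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)
```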
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
ontocord/wide_3b_sft_stage1.2-ss1-expert_formatted_text
|
ontocord
| 2025-03-06T02:43:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-06T01:55:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
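In the absence of model-specific instructions, a generic chat sketch inferred from this repo's tags (`qwen2`, `text-generation`, `conversational`) might look like the following; this is an assumption, not an official usage example:
```python
# Hedged sketch: load this checkpoint as a chat-capable causal LM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ontocord/wide_3b_sft_stage1.2-ss1-expert_formatted_text"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```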
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SpongeEngine/BlackSheep-Qwen-14B-i1-GGUF
|
SpongeEngine
| 2025-03-06T02:39:07Z | 0 | 0 | null |
[
"gguf",
"SpongeQuant",
"i1-GGUF",
"en",
"base_model:TroyDoesAI/BlackSheep-Qwen-14B",
"base_model:quantized:TroyDoesAI/BlackSheep-Qwen-14B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-05T21:24:19Z |
---
base_model: TroyDoesAI/BlackSheep-Qwen-14B
language:
- en
license: mit
quantized_by: SpongeQuant
tags:
- SpongeQuant
- i1-GGUF
---
Quantized to `i1-GGUF` using [SpongeQuant](https://github.com/SpongeEngine/SpongeQuant), the Oobabooga of LLM quantization.
<div style="display: flex; gap: 20px; align-items: center; margin-top:0;">
<a href="https://github.com/SpongeEngine/SpongeQuant">
<img src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/github-button.png" width="173">
</a>
<a href="https://discord.gg/azNmr2Gdgy">
<img src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/discord-button.png" width="173">
</a>
</div>
***
<figure>
<img src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/094.png" alt="UN Building Night">
<figcaption>UN Building Night</figcaption>
</figure>
<figure>
<audio controls>
<source src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/011.mp3" type="audio/mp3">
Your browser does not support the audio element.
</audio>
<figcaption>Chuck Berry – Johnny B. Goode (USA, 1958)</figcaption>
</figure>
***
### What is a GGUF?
GGUF is a file format used for running large language models (LLMs) on different types of computers. It supports both regular processors (CPUs) and graphics cards (GPUs), making it easier to run models across a wide range of hardware. Many LLMs require powerful and expensive GPUs, but GGUF improves compatibility and efficiency by optimizing how models are loaded and executed. If a GPU doesn't have enough memory, GGUF can offload parts of the model to the CPU, allowing it to run even when GPU resources are limited. GGUF is designed to work well with quantized models, which use less memory and run faster, making them ideal for lower-end hardware. However, it can also store full-precision models when needed. Thanks to these optimizations, GGUF allows LLMs to run efficiently on everything from high-end GPUs to laptops and even CPU-only systems.
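For illustration, a GGUF quant from a repository like this one can be loaded with the `llama-cpp-python` bindings. This is a minimal sketch; the `filename` glob is an assumption, so substitute a quant file that actually exists in this repo:
```python
# Hedged sketch: run a GGUF quant with llama-cpp-python, offloading to GPU if available.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="SpongeEngine/BlackSheep-Qwen-14B-i1-GGUF",
    filename="*IQ4_XS*",  # assumed quant level; pick any file present in the repo
    n_gpu_layers=-1,      # offload all layers to GPU; use 0 for CPU-only
    n_ctx=4096,
)
out = llm("Q: What is GGUF?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```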
### What is an i1-GGUF?
i1-GGUF is an enhanced type of GGUF model that uses imatrix quantization—a smarter way of reducing model size while preserving key details. Instead of shrinking everything equally, it analyzes the importance of different model components and keeps the most crucial parts more accurate. Like standard GGUF, i1-GGUF allows LLMs to run on various hardware, including CPUs and lower-end GPUs. However, because it prioritizes important weights, i1-GGUF models deliver better responses than traditional GGUF models while maintaining efficiency.
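For context, imatrix quants are typically produced with the llama.cpp tools along these lines (a sketch; the file names and calibration corpus are placeholders):
```bash
# 1. Compute an importance matrix (imatrix) from a calibration corpus.
llama-imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat
# 2. Quantize, letting the imatrix preserve precision for the most important weights.
llama-quantize --imatrix imatrix.dat model-f16.gguf model-IQ4_XS.gguf IQ4_XS
```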
|
KettaP/fine-tuned-paligemma-model-for-detection
|
KettaP
| 2025-03-06T02:37:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"paligemma",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-03-06T02:35:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
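In the absence of model-specific instructions, a generic sketch inferred from this repo's tags (`paligemma`, `image-text-to-text`) might look like the following; the detection prompt and image URL are placeholders, not confirmed by this card:
```python
# Hedged sketch: run a PaliGemma-style checkpoint on an image + text prompt.
import requests
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "KettaP/fine-tuned-paligemma-model-for-detection"
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

image = Image.open(requests.get("https://example.com/image.jpg", stream=True).raw)  # placeholder URL
inputs = processor(text="detect object", images=image, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(outputs[0], skip_special_tokens=True))
```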
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BeebekBhz/en-ne
|
BeebekBhz
| 2025-03-06T02:34:10Z | 25 | 0 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"base_model:Helsinki-NLP/opus-mt-en-hi",
"base_model:finetune:Helsinki-NLP/opus-mt-en-hi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-03-04T09:41:32Z |
---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-hi
tags:
- generated_from_keras_callback
model-index:
- name: en-ne
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# en-ne
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3328
- Validation Loss: 1.5832
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
### BLEU Score
The model achieves a BLEU score of **14.83** on the `test` split of the `BeebekBhz/en-ne` dataset.
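For reference, a score like this can be reproduced along the following lines. This is a minimal sketch: the dataset column names (`en`, `ne`) are assumptions, so adjust them to the actual schema of `BeebekBhz/en-ne`:
```python
# Hedged sketch: score the model on the test split with sacrebleu via `evaluate`.
# framework="tf" is used because this repo ships TensorFlow weights.
import evaluate
from datasets import load_dataset
from transformers import pipeline

translator = pipeline("translation", model="BeebekBhz/en-ne", framework="tf")
test = load_dataset("BeebekBhz/en-ne", split="test")

predictions = [o["translation_text"] for o in translator(test["en"], max_length=128)]
references = [[r] for r in test["ne"]]  # sacrebleu expects a list of references per sample

bleu = evaluate.load("sacrebleu")
print(bleu.compute(predictions=predictions, references=references)["score"])
```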
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.6327 | 2.0059 | 0 |
| 1.9273 | 1.7883 | 1 |
| 1.6957 | 1.6886 | 2 |
| 1.5456 | 1.6348 | 3 |
| 1.4287 | 1.6065 | 4 |
| 1.3328 | 1.5832 | 5 |
### Framework versions
- Transformers 4.47.1
- TensorFlow 2.17.1
- Datasets 3.3.2
- Tokenizers 0.21.0
|
Ayen0/stella1d3_3
|
Ayen0
| 2025-03-06T02:31:26Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-06T02:12:47Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Stella1D3_3
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Ayen0/stella1d3_3', weight_name='lora.safetensors')
image = pipeline('TOK, your prompt').images[0]  # include the trigger word TOK in your prompt
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
FriendliAI/internlm3-8b-instruct
|
FriendliAI
| 2025-03-06T02:30:35Z | 0 | 0 | null |
[
"safetensors",
"internlm3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2403.17297",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-03-06T02:30:34Z |
---
license: apache-2.0
pipeline_tag: text-generation
---
# InternLM
<div align="center">
<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">HOT</font></i>
</a>
</sup>
<div> </div>
</div>
[💻Github Repo](https://github.com/InternLM/InternLM) • [🤗Demo](https://huggingface.co/spaces/internlm/internlm3-8b-instruct) • [🤔Reporting Issues](https://github.com/InternLM/InternLM/issues/new) • [📜Technical Report](https://arxiv.org/abs/2403.17297)
</div>
<p align="center">
👋 join us on <a href="https://discord.gg/xa29JuW87d" target="_blank">Discord</a> and <a href="https://github.com/InternLM/InternLM/assets/25839884/a6aad896-7232-4220-ac84-9e070c2633ce" target="_blank">WeChat</a>
</p>
## Introduction
The InternLM3 release open-sources InternLM3-8B-Instruct, an 8-billion-parameter instruction model designed for general-purpose use and advanced reasoning. This model has the following characteristics:
- **Enhanced performance at reduced cost**:
  State-of-the-art performance on reasoning and knowledge-intensive tasks, surpassing models like Llama3.1-8B and Qwen2.5-7B. Remarkably, InternLM3 was trained on only 4 trillion high-quality tokens, saving more than 75% of the training cost compared to other LLMs of similar scale.
- **Deep thinking capability**:
InternLM3 supports both the deep thinking mode for solving complicated reasoning tasks via the long chain-of-thought and the normal response mode for fluent user interactions.
## InternLM3-8B-Instruct
### Performance Evaluation
We conducted a comprehensive evaluation of InternLM using the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/). The evaluation covered five dimensions of capabilities: disciplinary competence, language competence, knowledge competence, inference competence, and comprehension competence. Here are some of the evaluation results, and you can visit the [OpenCompass leaderboard](https://rank.opencompass.org.cn) for more evaluation results.
| | Benchmark | InternLM3-8B-Instruct | Qwen2.5-7B-Instruct | Llama3.1-8B-Instruct | GPT-4o-mini(closed source) |
| ------------ | ------------------------------- | --------------------- | ------------------- | -------------------- | -------------------------- |
| General | CMMLU(0-shot) | **83.1** | 75.8 | 53.9 | 66.0 |
| | MMLU(0-shot) | 76.6 | **76.8** | 71.8 | 82.7 |
| | MMLU-Pro(0-shot) | **57.6** | 56.2 | 48.1 | 64.1 |
| Reasoning | GPQA-Diamond(0-shot) | **37.4** | 33.3 | 24.2 | 42.9 |
| | DROP(0-shot) | **83.1** | 80.4 | 81.6 | 85.2 |
| | HellaSwag(10-shot) | **91.2** | 85.3 | 76.7 | 89.5 |
| | KOR-Bench(0-shot) | **56.4** | 44.6 | 47.7 | 58.2 |
| MATH | MATH-500(0-shot) | **83.0*** | 72.4 | 48.4 | 74.0 |
| | AIME2024(0-shot) | **20.0*** | 16.7 | 6.7 | 13.3 |
| Coding | LiveCodeBench(2407-2409 Pass@1) | **17.8** | 16.8 | 12.9 | 21.8 |
| | HumanEval(Pass@1) | 82.3 | **85.4** | 72.0 | 86.6 |
| Instruction | IFEval(Prompt-Strict) | **79.3** | 71.7 | 75.2 | 79.7 |
| Long Context | RULER(4-128K Average) | 87.9 | 81.4 | **88.5** | 90.7 |
| Chat | AlpacaEval 2.0(LC WinRate) | **51.1** | 30.3 | 25.0 | 50.7 |
| | WildBench(Raw Score) | **33.1** | 23.3 | 1.5 | 40.3 |
| | MT-Bench-101(Score 1-10) | **8.59** | 8.49 | 8.37 | 8.87 |
- Values marked in bold indicate the **highest** among the open-source models compared.
- The evaluation results were obtained from [OpenCompass](https://github.com/internLM/OpenCompass/) (entries marked with * were evaluated in Thinking Mode); the evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/internLM/OpenCompass/).
- Scores may differ across versions of [OpenCompass](https://github.com/internLM/OpenCompass/); please refer to the latest [OpenCompass](https://github.com/internLM/OpenCompass/) results.
**Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.
### Requirements
```text
transformers >= 4.48
```
### Conversation Mode
#### Transformers inference
To load the InternLM3 8B Instruct model using Transformers, use the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_dir = "internlm/internlm3-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and might cause OOM Error.
model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
# (Optional) If on low resource devices, you can load model in 4-bit or 8-bit to further save GPU memory via bitsandbytes.
# InternLM3 8B in 4bit will cost nearly 8GB GPU memory.
# pip install -U bitsandbytes
# 8-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_8bit=True)
# 4-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_4bit=True)
model = model.eval()
system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文."""
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "Please tell me five scenic spots in Shanghai"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
generated_ids = model.generate(tokenized_chat, max_new_tokens=1024, temperature=1, repetition_penalty=1.005, top_k=40, top_p=0.8)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(tokenized_chat, generated_ids)
]
prompt = tokenizer.batch_decode(tokenized_chat)[0]
print(prompt)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
#### LMDeploy inference
LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams.
```bash
pip install lmdeploy
```
You can run batch inference locally with the following python code:
```python
import lmdeploy
model_dir = "internlm/internlm3-8b-instruct"
pipe = lmdeploy.pipeline(model_dir)
response = pipe("Please tell me five scenic spots in Shanghai")
print(response)
```
Or you can launch an OpenAI compatible server with the following command:
```bash
lmdeploy serve api_server internlm/internlm3-8b-instruct --model-name internlm3-8b-instruct --server-port 23333
```
Then you can send a chat request to the server:
```bash
curl http://localhost:23333/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm3-8b-instruct",
"messages": [
{"role": "user", "content": "Please tell me five scenic spots in Shanghai"}
]
}'
```
Find more details in the [LMDeploy documentation](https://lmdeploy.readthedocs.io/en/latest/)
#### Ollama inference
First install ollama:
```bash
# install ollama
curl -fsSL https://ollama.com/install.sh | sh
# fetch the model
ollama pull internlm/internlm3-8b-instruct
# install the python client
pip install ollama
```
Inference code:
```python
import ollama
system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文."""
messages = [
{
"role": "system",
"content": system_prompt,
},
{
"role": "user",
"content": "Please tell me five scenic spots in Shanghai"
},
]
stream = ollama.chat(
model='internlm/internlm3-8b-instruct',
messages=messages,
stream=True,
)
for chunk in stream:
print(chunk['message']['content'], end='', flush=True)
```
#### vLLM inference
Refer to the [installation guide](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to install the latest version of vllm:
```bash
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
```
Inference code:
```python
from vllm import LLM, SamplingParams
llm = LLM(model="internlm/internlm3-8b-instruct")
sampling_params = SamplingParams(temperature=1, repetition_penalty=1.005, top_k=40, top_p=0.8)
system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文."""
prompts = [
{
"role": "system",
"content": system_prompt,
},
{
"role": "user",
"content": "Please tell me five scenic spots in Shanghai"
},
]
outputs = llm.chat(prompts,
sampling_params=sampling_params,
use_tqdm=False)
print(outputs)
```
### Thinking Mode
#### Thinking Demo
<img src="https://github.com/InternLM/InternLM/blob/017ba7446d20ecc3b9ab8e7b66cc034500868ab4/assets/solve_puzzle.png?raw=true" width="400"/>
#### Thinking system prompt
```python
thinking_system_prompt = """You are an expert mathematician with extensive experience in mathematical competitions. You approach problems through systematic thinking and rigorous reasoning. When solving problems, follow these thought processes:
## Deep Understanding
Take time to fully comprehend the problem before attempting a solution. Consider:
- What is the real question being asked?
- What are the given conditions and what do they tell us?
- Are there any special restrictions or assumptions?
- Which information is crucial and which is supplementary?
## Multi-angle Analysis
Before solving, conduct thorough analysis:
- What mathematical concepts and properties are involved?
- Can you recall similar classic problems or solution methods?
- Would diagrams or tables help visualize the problem?
- Are there special cases that need separate consideration?
## Systematic Thinking
Plan your solution path:
- Propose multiple possible approaches
- Analyze the feasibility and merits of each method
- Choose the most appropriate method and explain why
- Break complex problems into smaller, manageable steps
## Rigorous Proof
During the solution process:
- Provide solid justification for each step
- Include detailed proofs for key conclusions
- Pay attention to logical connections
- Be vigilant about potential oversights
## Repeated Verification
After completing your solution:
- Verify your results satisfy all conditions
- Check for overlooked special cases
- Consider if the solution can be optimized or simplified
- Review your reasoning process
Remember:
1. Take time to think thoroughly rather than rushing to an answer
2. Rigorously prove each key conclusion
3. Keep an open mind and try different approaches
4. Summarize valuable problem-solving methods
5. Maintain healthy skepticism and verify multiple times
Your response should reflect deep mathematical understanding and precise logical thinking, making your solution path and reasoning clear to others.
When you're ready, present your complete solution with:
- Clear problem understanding
- Detailed solution process
- Key insights
- Thorough verification
Focus on clear, logical progression of ideas and thorough explanation of your mathematical reasoning. Provide answers in the same language as the user asking the question, repeat the final answer using a '\\boxed{}' without any units, you have [[8192]] tokens to complete the answer.
"""
```
#### Transformers inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_dir = "internlm/internlm3-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and might cause OOM Error.
model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
# (Optional) If on low resource devices, you can load model in 4-bit or 8-bit to further save GPU memory via bitsandbytes.
# InternLM3 8B in 4bit will cost nearly 8GB GPU memory.
# pip install -U bitsandbytes
# 8-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_8bit=True)
# 4-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_4bit=True)
model = model.eval()
messages = [
{"role": "system", "content": thinking_system_prompt},
{"role": "user", "content": "Given the function\(f(x)=\mathrm{e}^{x}-ax - a^{3}\),\n(1) When \(a = 1\), find the equation of the tangent line to the curve \(y = f(x)\) at the point \((1,f(1))\).\n(2) If \(f(x)\) has a local minimum and the minimum value is less than \(0\), determine the range of values for \(a\)."},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
generated_ids = model.generate(tokenized_chat, max_new_tokens=8192)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(tokenized_chat, generated_ids)
]
prompt = tokenizer.batch_decode(tokenized_chat)[0]
print(prompt)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
#### LMDeploy inference
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
```bash
pip install lmdeploy
```
You can run batch inference locally with the following python code:
```python
from lmdeploy import pipeline, GenerationConfig, ChatTemplateConfig
model_dir = "internlm/internlm3-8b-instruct"
chat_template_config = ChatTemplateConfig(model_name='internlm3')
pipe = pipeline(model_dir, chat_template_config=chat_template_config)
messages = [
{"role": "system", "content": thinking_system_prompt},
{"role": "user", "content": "Given the function\(f(x)=\mathrm{e}^{x}-ax - a^{3}\),\n(1) When \(a = 1\), find the equation of the tangent line to the curve \(y = f(x)\) at the point \((1,f(1))\).\n(2) If \(f(x)\) has a local minimum and the minimum value is less than \(0\), determine the range of values for \(a\)."},
]
response = pipe(messages, gen_config=GenerationConfig(max_new_tokens=2048))
print(response)
```
#### Ollama inference
First install ollama:
```bash
# install ollama
curl -fsSL https://ollama.com/install.sh | sh
# fetch the model
ollama pull internlm/internlm3-8b-instruct
# install the python client
pip install ollama
```
Inference code:
```python
import ollama
messages = [
{
"role": "system",
"content": thinking_system_prompt,
},
{
"role": "user",
"content": "Given the function\(f(x)=\mathrm{e}^{x}-ax - a^{3}\),\n(1) When \(a = 1\), find the equation of the tangent line to the curve \(y = f(x)\) at the point \((1,f(1))\).\n(2) If \(f(x)\) has a local minimum and the minimum value is less than \(0\), determine the range of values for \(a\)."
},
]
stream = ollama.chat(
model='internlm/internlm3-8b-instruct',
messages=messages,
stream=True,
)
for chunk in stream:
print(chunk['message']['content'], end='', flush=True)
```
#### vLLM inference
Refer to the [installation guide](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to install the latest version of vllm:
```bash
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
```
Inference code:
```python
from vllm import LLM, SamplingParams
llm = LLM(model="internlm/internlm3-8b-instruct")
sampling_params = SamplingParams(temperature=1, repetition_penalty=1.005, top_k=40, top_p=0.8, max_tokens=8192)
prompts = [
{
"role": "system",
"content": thinking_system_prompt,
},
{
"role": "user",
"content": "Given the function\(f(x)=\mathrm{e}^{x}-ax - a^{3}\),\n(1) When \(a = 1\), find the equation of the tangent line to the curve \(y = f(x)\) at the point \((1,f(1))\).\n(2) If \(f(x)\) has a local minimum and the minimum value is less than \(0\), determine the range of values for \(a\)."
},
]
outputs = llm.chat(prompts,
sampling_params=sampling_params,
use_tqdm=False)
print(outputs)
```
## Open Source License
Code and model weights are licensed under Apache-2.0.
## Citation
```
@misc{cai2024internlm2,
title={InternLM2 Technical Report},
author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin},
year={2024},
eprint={2403.17297},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Introduction
### InternLM3-8B-Instruct
InternLM3, the third generation of the InternLM (书生·浦语) series, open-sources InternLM3-8B-Instruct, an 8-billion-parameter instruction model for general-purpose use and advanced reasoning. The model has the following characteristics:
- **Higher performance at lower cost**:
  It achieves the best performance among models of the same scale on reasoning and knowledge tasks, surpassing Llama3.1-8B and Qwen2.5-7B. Notably, InternLM3 was trained on only 4 trillion tokens, saving more than 75% of the training cost compared with models of a similar scale.
- **Deep thinking capability**:
InternLM3 supports a deep thinking mode for solving complex reasoning tasks via long chains of thought, while also providing a normal response mode for a smoother user experience.
#### Performance Evaluation
We conducted a comprehensive evaluation of InternLM using the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/), covering five capability dimensions: disciplinary, language, knowledge, inference, and comprehension competence. Some of the results are shown in the table below; visit the [OpenCompass leaderboard](https://rank.opencompass.org.cn) for more evaluation results.
| | Benchmark | InternLM3-8B-Instruct | Qwen2.5-7B-Instruct | Llama3.1-8B-Instruct | GPT-4o-mini(closed source) |
| ------------ | ------------------------------- | --------------------- | ------------------- | -------------------- | ----------------- |
| General | CMMLU(0-shot) | **83.1** | 75.8 | 53.9 | 66.0 |
| | MMLU(0-shot) | 76.6 | **76.8** | 71.8 | 82.7 |
| | MMLU-Pro(0-shot) | **57.6** | 56.2 | 48.1 | 64.1 |
| Reasoning | GPQA-Diamond(0-shot) | **37.4** | 33.3 | 24.2 | 42.9 |
| | DROP(0-shot) | **83.1** | 80.4 | 81.6 | 85.2 |
| | HellaSwag(10-shot) | **91.2** | 85.3 | 76.7 | 89.5 |
| | KOR-Bench(0-shot) | **56.4** | 44.6 | 47.7 | 58.2 |
| MATH | MATH-500(0-shot) | **83.0*** | 72.4 | 48.4 | 74.0 |
| | AIME2024(0-shot) | **20.0*** | 16.7 | 6.7 | 13.3 |
| Coding | LiveCodeBench(2407-2409 Pass@1) | **17.8** | 16.8 | 12.9 | 21.8 |
| | HumanEval(Pass@1) | 82.3 | **85.4** | 72.0 | 86.6 |
| Instruction | IFEval(Prompt-Strict) | **79.3** | 71.7 | 75.2 | 79.7 |
| Long Context | RULER(4-128K Average) | 87.9 | 81.4 | **88.5** | 90.7 |
| Chat | AlpacaEval 2.0(LC WinRate) | **51.1** | 30.3 | 25.0 | 50.7 |
| | WildBench(Raw Score) | **33.1** | 23.3 | 1.5 | 40.3 |
| | MT-Bench-101(Score 1-10) | **8.59** | 8.49 | 8.37 | 8.87 |
- Values marked in bold indicate the **highest** among the open-source models compared.
- The evaluation results above were obtained from [OpenCompass](https://github.com/internLM/OpenCompass/) (entries marked with * were evaluated in deep thinking mode); the evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/internLM/OpenCompass/).
- Scores may differ across versions of [OpenCompass](https://github.com/internLM/OpenCompass/); please refer to the latest [OpenCompass](https://github.com/internLM/OpenCompass/) results.
**Limitations:** Although we paid close attention to the safety of the model during training and tried to encourage it to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and the probabilistic generation paradigm. For example, responses may contain bias, discrimination, or other harmful content; please do not propagate such content. This project assumes no responsibility for any consequences resulting from the dissemination of harmful information.
#### Requirements
```text
transformers >= 4.48
```
#### Conversation Mode
##### Transformers inference
Load the InternLM3 8B Instruct model with the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_dir = "internlm/internlm3-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and might cause OOM Error.
model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
# (Optional) If on low resource devices, you can load model in 4-bit or 8-bit to further save GPU memory via bitsandbytes.
# InternLM3 8B in 4bit will cost nearly 8GB GPU memory.
# pip install -U bitsandbytes
# 8-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_8bit=True)
# 4-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_4bit=True)
model = model.eval()
system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文."""
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "Please tell me five scenic spots in Shanghai"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
generated_ids = model.generate(tokenized_chat, max_new_tokens=1024, temperature=1, repetition_penalty=1.005, top_k=40, top_p=0.8)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(tokenized_chat, generated_ids)
]
prompt = tokenizer.batch_decode(tokenized_chat)[0]
print(prompt)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
##### LMDeploy inference
LMDeploy is a full-stack, lightweight toolkit for compressing, deploying, and serving LLMs.
```bash
pip install lmdeploy
```
You can run batch inference locally with the following Python code:
```python
import lmdeploy
model_dir = "internlm/internlm3-8b-instruct"
pipe = lmdeploy.pipeline(model_dir)
response = pipe(["Please tell me five scenic spots in Shanghai"])
print(response)
```
Or you can launch an OpenAI-compatible server with the following command:
```bash
lmdeploy serve api_server internlm/internlm3-8b-instruct --model-name internlm3-8b-instruct --server-port 23333
```
Then you can send a chat request to the server:
```bash
curl http://localhost:23333/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm3-8b-instruct",
"messages": [
{"role": "user", "content": "介绍一下深度学习。"}
]
}'
```
Find more details in the [LMDeploy documentation](https://lmdeploy.readthedocs.io/en/latest/)
##### Ollama inference
First install ollama:
```bash
# install ollama
curl -fsSL https://ollama.com/install.sh | sh
# fetch the model
ollama pull internlm/internlm3-8b-instruct
# install the python client
pip install ollama
```
Inference code:
```python
import ollama
system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文."""
messages = [
{
"role": "system",
"content": system_prompt,
},
{
"role": "user",
"content": "Please tell me five scenic spots in Shanghai"
},
]
stream = ollama.chat(
model='internlm/internlm3-8b-instruct',
messages=messages,
stream=True,
)
for chunk in stream:
print(chunk['message']['content'], end='', flush=True)
```
##### vLLM inference
Refer to the [installation guide](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to install the latest version of vllm:
```bash
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
```
Inference code:
```python
from vllm import LLM, SamplingParams
llm = LLM(model="internlm/internlm3-8b-instruct")
sampling_params = SamplingParams(temperature=1, repetition_penalty=1.005, top_k=40, top_p=0.8)
system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文."""
prompts = [
{
"role": "system",
"content": system_prompt,
},
{
"role": "user",
"content": "Please tell me five scenic spots in Shanghai"
},
]
outputs = llm.chat(prompts,
sampling_params=sampling_params,
use_tqdm=False)
print(outputs)
```
#### Thinking Mode
##### Thinking Demo
<img src="https://github.com/InternLM/InternLM/blob/017ba7446d20ecc3b9ab8e7b66cc034500868ab4/assets/solve_puzzle.png?raw=true" width="400"/>
##### Thinking system prompt
```python
thinking_system_prompt = """You are an expert mathematician with extensive experience in mathematical competitions. You approach problems through systematic thinking and rigorous reasoning. When solving problems, follow these thought processes:
## Deep Understanding
Take time to fully comprehend the problem before attempting a solution. Consider:
- What is the real question being asked?
- What are the given conditions and what do they tell us?
- Are there any special restrictions or assumptions?
- Which information is crucial and which is supplementary?
## Multi-angle Analysis
Before solving, conduct thorough analysis:
- What mathematical concepts and properties are involved?
- Can you recall similar classic problems or solution methods?
- Would diagrams or tables help visualize the problem?
- Are there special cases that need separate consideration?
## Systematic Thinking
Plan your solution path:
- Propose multiple possible approaches
- Analyze the feasibility and merits of each method
- Choose the most appropriate method and explain why
- Break complex problems into smaller, manageable steps
## Rigorous Proof
During the solution process:
- Provide solid justification for each step
- Include detailed proofs for key conclusions
- Pay attention to logical connections
- Be vigilant about potential oversights
## Repeated Verification
After completing your solution:
- Verify your results satisfy all conditions
- Check for overlooked special cases
- Consider if the solution can be optimized or simplified
- Review your reasoning process
Remember:
1. Take time to think thoroughly rather than rushing to an answer
2. Rigorously prove each key conclusion
3. Keep an open mind and try different approaches
4. Summarize valuable problem-solving methods
5. Maintain healthy skepticism and verify multiple times
Your response should reflect deep mathematical understanding and precise logical thinking, making your solution path and reasoning clear to others.
When you're ready, present your complete solution with:
- Clear problem understanding
- Detailed solution process
- Key insights
- Thorough verification
Focus on clear, logical progression of ideas and thorough explanation of your mathematical reasoning. Provide answers in the same language as the user asking the question, repeat the final answer using a '\\boxed{}' without any units, you have [[8192]] tokens to complete the answer.
"""
```
##### Transformers inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_dir = "internlm/internlm3-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and might cause OOM Error.
model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
# (Optional) If on low resource devices, you can load model in 4-bit or 8-bit to further save GPU memory via bitsandbytes.
# InternLM3 8B in 4bit will cost nearly 8GB GPU memory.
# pip install -U bitsandbytes
# 8-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_8bit=True)
# 4-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_4bit=True)
model = model.eval()
messages = [
{"role": "system", "content": thinking_system_prompt},
{"role": "user", "content": "已知函数\(f(x)=\mathrm{e}^{x}-ax - a^{3}\)。\n(1)当\(a = 1\)时,求曲线\(y = f(x)\)在点\((1,f(1))\)处的切线方程;\n(2)若\(f(x)\)有极小值,且极小值小于\(0\),求\(a\)的取值范围。"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
generated_ids = model.generate(tokenized_chat, max_new_tokens=8192)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(tokenized_chat, generated_ids)
]
prompt = tokenizer.batch_decode(tokenized_chat)[0]
print(prompt)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
##### Inference with LMDeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams.
```bash
pip install lmdeploy
```
You can run batch inference locally with the following python code:
```python
from lmdeploy import pipeline, GenerationConfig, ChatTemplateConfig
model_dir = "internlm/internlm3-8b-instruct"
chat_template_config = ChatTemplateConfig(model_name='internlm3')
pipe = pipeline(model_dir, chat_template_config=chat_template_config)
messages = [
{"role": "system", "content": thinking_system_prompt},
{"role": "user", "content": "已知函数\(f(x)=\mathrm{e}^{x}-ax - a^{3}\)。\n(1)当\(a = 1\)时,求曲线\(y = f(x)\)在点\((1,f(1))\)处的切线方程;\n(2)若\(f(x)\)有极小值,且极小值小于\(0\),求\(a\)的取值范围。"},
]
response = pipe(messages, gen_config=GenerationConfig(max_new_tokens=2048))
print(response)
```
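LMDeploy can also serve the model through an OpenAI-compatible HTTP API. A minimal sketch (the port number is an arbitrary example, not taken from this card):
```bash
# launch an OpenAI-compatible API server; the port is an arbitrary example
lmdeploy serve api_server internlm/internlm3-8b-instruct --server-port 23333
```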
##### Inference with Ollama
Setup:
```bash
# install ollama
curl -fsSL https://ollama.com/install.sh | sh
# fetch the model
ollama pull internlm/internlm3-8b-instruct
# install the python library
pip install ollama
```
Inference code:
```python
import ollama
messages = [
{
"role": "system",
"content": thinking_system_prompt,
},
{
"role": "user",
"content": "Given the function\(f(x)=\mathrm{e}^{x}-ax - a^{3}\),\n(1) When \(a = 1\), find the equation of the tangent line to the curve \(y = f(x)\) at the point \((1,f(1))\).\n(2) If \(f(x)\) has a local minimum and the minimum value is less than \(0\), determine the range of values for \(a\)."
},
]
stream = ollama.chat(
model='internlm/internlm3-8b-instruct',
messages=messages,
stream=True,
)
for chunk in stream:
print(chunk['message']['content'], end='', flush=True)
```
##### Inference with vLLM
Refer to the [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) to install the latest vLLM:
```bash
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
```
Inference code:
```python
from vllm import LLM, SamplingParams
llm = LLM(model="internlm/internlm3-8b-instruct")
sampling_params = SamplingParams(temperature=1, repetition_penalty=1.005, top_k=40, top_p=0.8, max_tokens=8192)
prompts = [
{
"role": "system",
"content": thinking_system_prompt,
},
{
"role": "user",
"content": "已知函数\(f(x)=\mathrm{e}^{x}-ax - a^{3}\)。\n(1)当\(a = 1\)时,求曲线\(y = f(x)\)在点\((1,f(1))\)处的切线方程;\n(2)若\(f(x)\)有极小值,且极小值小于\(0\),求\(a\)的取值范围。"
},
]
outputs = llm.chat(prompts,
sampling_params=sampling_params,
use_tqdm=False)
print(outputs)
```
## Open Source License
The code and weights in this repository are open-sourced under the Apache-2.0 license.
## Citation
```
@misc{cai2024internlm2,
title={InternLM2 Technical Report},
author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin},
year={2024},
eprint={2403.17297},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
TOMFORD79/Special_Titanium4
|
TOMFORD79
| 2025-03-06T02:28:59Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-03-06T02:15:52Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
taobao-mnn/QwQ-32B-MNN
|
taobao-mnn
| 2025-03-06T02:28:56Z | 0 | 0 | null |
[
"chat",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-03-06T02:08:02Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# QwQ-32B-MNN
## Introduction
This model is a 4-bit quantized version of the MNN model exported from QwQ-32B using [llmexport](https://github.com/alibaba/MNN/tree/master/transformers/llm/export).
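For reference, a hedged sketch of what such an export invocation looks like; the flag names (`--path`, `--export`, `--quant_bit`) are assumptions based on the llmexport README and should be verified there:
```bash
# hypothetical export sketch; verify flags against the llmexport README
git clone https://github.com/alibaba/MNN.git
cd MNN/transformers/llm/export
python llmexport.py --path /path/to/QwQ-32B --export mnn --quant_bit 4
```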
## Download
```bash
# install the Hugging Face Hub client
pip install huggingface_hub
```
```bash
# shell download
huggingface-cli download taobao-mnn/QwQ-32B-MNN --local-dir path/to/dir
```
```python
# SDK download
from huggingface_hub import snapshot_download
model_dir = snapshot_download('taobao-mnn/QwQ-32B-MNN')
```
```bash
# git clone
git clone https://www.modelscope.cn/taobao-mnn/QwQ-32B-MNN
```
## Usage
```bash
# clone MNN source
git clone https://github.com/alibaba/MNN.git
# compile
cd MNN
mkdir build && cd build
cmake .. -DMNN_LOW_MEMORY=true -DMNN_CPU_WEIGHT_DEQUANT_GEMM=true -DMNN_BUILD_LLM=true -DMNN_SUPPORT_TRANSFORMER_FUSE=true
make -j
# run
./llm_demo /path/to/QwQ-32B-MNN/config.json prompt.txt
```
## Documentation
[MNN-LLM](https://mnn-docs.readthedocs.io/en/latest/transformers/llm.html#)
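Besides the C++ `llm_demo`, MNN also ships Python bindings for its LLM runtime. The sketch below assumes the `MNN.llm` API described in the MNN-LLM documentation linked above; treat the exact method names as assumptions and confirm them against the docs:
```python
# assumes MNN's Python LLM bindings; verify the API against the MNN-LLM docs
import MNN.llm as llm

model = llm.create("/path/to/QwQ-32B-MNN/config.json")  # load the exported model config
model.load()                                            # load weights into the runtime
print(model.response("Hello, who are you?"))            # single-turn generation
```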
|
ClarenceDan/6cd884c7-ba6d-43ca-82c6-37de4ffcb04b
|
ClarenceDan
| 2025-03-06T02:28:25Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:adapter:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | 2025-03-06T01:53:01Z |
---
library_name: peft
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6cd884c7-ba6d-43ca-82c6-37de4ffcb04b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: lmsys/vicuna-7b-v1.5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c85e29fecb2ff3d0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c85e29fecb2ff3d0_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/6cd884c7-ba6d-43ca-82c6-37de4ffcb04b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/c85e29fecb2ff3d0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8cbffdb5-6a01-4537-a1fa-6ea0aeab36ad
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8cbffdb5-6a01-4537-a1fa-6ea0aeab36ad
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6cd884c7-ba6d-43ca-82c6-37de4ffcb04b
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0253 | 0.0002 | 1 | 0.9100 |
| 1.0145 | 0.0005 | 3 | 0.9087 |
| 0.9082 | 0.0010 | 6 | 0.8890 |
| 0.8546 | 0.0015 | 9 | 0.8164 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
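Since this repository contains only LoRA adapter weights, they need to be applied on top of the base model at load time. A minimal sketch using the repo IDs from this card:
```python
# minimal sketch: load the base model, then attach this LoRA adapter
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-7b-v1.5")
model = PeftModel.from_pretrained(base, "ClarenceDan/6cd884c7-ba6d-43ca-82c6-37de4ffcb04b")
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")
```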
|
casque/JasminePonyXL_character-10
|
casque
| 2025-03-06T02:27:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-03-06T02:26:27Z |
---
license: creativeml-openrail-m
---
|
peulsilva/qwen-0.5b-instruct-summary-pt-rank64
|
peulsilva
| 2025-03-06T02:26:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-06T02:25:38Z |
---
base_model: unsloth/qwen2-0.5b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** peulsilva
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-0.5b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
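A minimal sketch for loading this model with Unsloth for inference; `max_seq_length` is an illustrative value, not taken from the card:
```python
# illustrative sketch: load with Unsloth's FastLanguageModel for inference
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="peulsilva/qwen-0.5b-instruct-summary-pt-rank64",
    max_seq_length=2048,  # illustrative value
    load_in_4bit=True,    # mirrors the 4-bit base model
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path
```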
|
mlfoundations-dev/llama3-1_8b_gsmyrnis_test_dpo_data
|
mlfoundations-dev
| 2025-03-06T02:25:36Z | 40 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-04T01:33:47Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- trl
- dpo
- generated_from_trainer
model-index:
- name: llama3-1_8b_gsmyrnis_test_dpo_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-1_8b_gsmyrnis_test_dpo_data
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/gsmyrnis_test_dpo_data dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-07
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 32
- total_eval_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
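For reference, a hedged sketch of how the hyperparameters above would map onto TRL's `DPOConfig`. The card indicates the run used LLaMA-Factory, so this mapping is illustrative only, and `beta` is set to a common default as an assumption (it is not stated on the card):
```python
# illustrative mapping of the listed hyperparameters onto TRL's DPOConfig;
# the actual run used LLaMA-Factory, so treat this as a sketch of equivalent settings
from trl import DPOConfig

config = DPOConfig(
    output_dir="llama3-1_8b_gsmyrnis_test_dpo_data",
    learning_rate=8e-7,
    per_device_train_batch_size=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
    beta=0.1,  # assumption: common DPO default, not stated on the card
)
```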
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.0.2
- Tokenizers 0.20.3
|