| modelId<br>string (5 to 139 chars) | author<br>string (2 to 42 chars) | last_modified<br>timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-06-24 12:28:46) | downloads<br>int64 (0 to 223M) | likes<br>int64 (0 to 11.7k) | library_name<br>string (493 classes) | tags<br>sequence (1 to 4.05k items) | pipeline_tag<br>string (54 classes) | createdAt<br>timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-06-24 12:27:57) | card<br>string (11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
edvenswa/ICD-COT-100-reasoning-Test-8-llama-2-batchsize2-8b | edvenswa | 2025-04-29T12:10:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T12:10:36Z | ---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** edvenswa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
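A minimal inference sketch with 🤗 Transformers, assuming the repo contains merged causal-LM weights rather than bare LoRA adapters (if only adapters were pushed, load them with PEFT on top of the base model instead); the prompt is a placeholder:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "edvenswa/ICD-COT-100-reasoning-Test-8-llama-2-batchsize2-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Placeholder prompt; the repo name suggests ICD-coding chain-of-thought data.
messages = [{"role": "user", "content": "Explain the reasoning behind assigning an ICD code."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```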
|
samlucas/smolvlm_256m-parking_occupancy-PKLot-instruct-with-context-without-expert | samlucas | 2025-04-29T12:10:10Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:HuggingFaceTB/SmolVLM-256M-Instruct",
"base_model:adapter:HuggingFaceTB/SmolVLM-256M-Instruct",
"region:us"
] | null | 2025-04-29T12:09:48Z | ---
base_model: HuggingFaceTB/SmolVLM-256M-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
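Until the authors fill this in, here is a minimal sketch for attaching the PEFT adapter to its base model; the loading pattern follows the base SmolVLM checkpoint's documented usage, and everything beyond the two repo IDs is an assumption:
```python
from transformers import AutoProcessor, AutoModelForVision2Seq
from peft import PeftModel

base_id = "HuggingFaceTB/SmolVLM-256M-Instruct"
adapter_id = "samlucas/smolvlm_256m-parking_occupancy-PKLot-instruct-with-context-without-expert"

processor = AutoProcessor.from_pretrained(base_id)
base = AutoModelForVision2Seq.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # wrap the base model with the LoRA adapter
```
Generation then works as for the base SmolVLM checkpoint, with the processor preparing interleaved image-and-text inputs.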
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
team-9/gpt2-finetune-github-minhash-0.8-256-1M-data | team-9 | 2025-04-29T06:27:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T02:33:22Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetune-github-minhash-0.8-256-1M-data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetune-github-minhash-0.8-256-1M-data
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2489
## Model description
More information needed
## Intended uses & limitations
More information needed
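Pending details from the authors, a minimal sketch for sampling from the checkpoint; the code-style prompt assumes GitHub-sourced training data, as the repo name suggests:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="team-9/gpt2-finetune-github-minhash-0.8-256-1M-data")
print(generator("def quicksort(arr):", max_new_tokens=64)[0]["generated_text"])
```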
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.375 | 1.0 | 13311 | 1.3192 |
| 1.3242 | 2.0 | 26622 | 1.2652 |
| 1.3063 | 3.0 | 39933 | 1.2489 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
gornostay/noun-case-classifier | gornostay | 2025-04-29T06:26:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-04-29T02:30:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
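As a stopgap, a minimal sketch using the standard token-classification pipeline; the input sentence is a placeholder, since neither the target language nor the label set is documented:
```python
from transformers import pipeline

classifier = pipeline("token-classification", model="gornostay/noun-case-classifier")
for token in classifier("The museum opened a new exhibition yesterday."):
    print(token["word"], token["entity"], round(token["score"], 3))
```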
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
soprasteria/models-KV | soprasteria | 2025-04-29T06:22:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] | text-generation | 2025-04-29T06:14:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
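In the meantime, a minimal chat-generation sketch; note the repo carries a `compressed-tensors` tag, so loading may additionally require the `compressed-tensors` package installed:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "soprasteria/models-KV"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]  # placeholder prompt
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```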
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kondara/Qwen3-14B-Q4_K_M-GGUF | Kondara | 2025-04-29T06:22:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-14B",
"base_model:quantized:Qwen/Qwen3-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-29T06:21:45Z | ---
base_model: Qwen/Qwen3-14B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Kondara/Qwen3-14B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-14B`](https://huggingface.co/Qwen/Qwen3-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-14B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Kondara/Qwen3-14B-Q4_K_M-GGUF --hf-file qwen3-14b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Kondara/Qwen3-14B-Q4_K_M-GGUF --hf-file qwen3-14b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Kondara/Qwen3-14B-Q4_K_M-GGUF --hf-file qwen3-14b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Kondara/Qwen3-14B-Q4_K_M-GGUF --hf-file qwen3-14b-q4_k_m.gguf -c 2048
```
|
lan-xinh-y-u-06-link/VIRAL.Video.lan.xinh.y.u.06.link.lanhxinhyeu06.l.clip | lan-xinh-y-u-06-link | 2025-04-29T06:21:23Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T06:20:55Z | ---
license: apache-2.0
---
[►✅ CLICK HERE ==►► Full Video ❤️❤️⬇️⬇️](https://ultra-bulletin.blogspot.com/p/ultra-bulletin-10.html)
**[WATCH NOW](https://ultra-bulletin.blogspot.com/p/ultra-bulletin-10.html)**
<a href="https://ultra-bulletin.blogspot.com/p/ultra-bulletin-10.html"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a> |
AlexHung29629/mistral-small-if-rl-3000-0427 | AlexHung29629 | 2025-04-29T06:18:13Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral3",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-27T11:50:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
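Until the card is completed, a minimal sketch using the `image-text-to-text` pipeline; the image URL is a placeholder, and a recent Transformers release is assumed for `mistral3` support:
```python
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="AlexHung29629/mistral-small-if-rl-3000-0427")
messages = [{"role": "user", "content": [
    {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder image
    {"type": "text", "text": "Describe this image."},
]}]
print(pipe(text=messages, max_new_tokens=64)[0]["generated_text"])
```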
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sophierain-spider-vid-update/sophierain-spider-vid-update-latest-tutorial | Sophierain-spider-vid-update | 2025-04-29T06:16:25Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-29T06:15:51Z | <p><a href="https://viraltube4.blogspot.com/2024/12/sophie-rain-most-viral-videos-spiderman.html" rel="nofollow">►►✅ CLICK HERE ==►► Full Video ✅</a></p>
<p><a href="https://viraltube4.blogspot.com/2024/12/sophie-rain-most-viral-videos-spiderman.html" rel="nofollow">🔴► CLICK HERE ==►► Download Now ⬇️⬇️✅</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://viraltube4.blogspot.com/2024/12/sophie-rain-most-viral-videos-spiderman.html"><img height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
03 seconds ago
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video official video twitter
next tutorial:
<h2>Here you can find the Sophie Rain Spiderman video tutorial:</h2>
In this tutorial I will show you how to find the Sophie Rain Spiderman video tutorial 2025.
<br/>
<br/>
<center><a href="https://2cm.es/WoBp" style="color:blue;">๐ค Click Here to Watch ๐ค</a></center>
<br/>
<h2>How to find:</h2>
Go to Google and search for "Sophie Rain Spiderman video tutorial", or click the button above to watch the tutorial.
Alternatively, go to YouTube and search for "Sophie Rain video tutorial"; it is the best way to find the Sophie Rain Spiderman video tutorial 2025. |
mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF | mradermacher | 2025-04-29T06:15:40Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TareksTesting/Alkahest-V8-LLaMa-70B",
"base_model:quantized:TareksTesting/Alkahest-V8-LLaMa-70B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-28T16:37:15Z | ---
base_model: TareksTesting/Alkahest-V8-LLaMa-70B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TareksTesting/Alkahest-V8-LLaMa-70B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
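Concatenation is a plain byte-wise join of the parts in order; a minimal sketch for the split Q6_K files listed below:
```bash
cat Alkahest-V8-LLaMa-70B.i1-Q6_K.gguf.part1of2 \
    Alkahest-V8-LLaMa-70B.i1-Q6_K.gguf.part2of2 \
    > Alkahest-V8-LLaMa-70B.i1-Q6_K.gguf
```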
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alkahest-V8-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V8-LLaMa-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Kondara/Qwen3-8B-Q4_K_M-GGUF | Kondara | 2025-04-29T06:14:21Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-29T06:13:59Z | ---
base_model: Qwen/Qwen3-8B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Kondara/Qwen3-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-8B`](https://huggingface.co/Qwen/Qwen3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Kondara/Qwen3-8B-Q4_K_M-GGUF --hf-file qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Kondara/Qwen3-8B-Q4_K_M-GGUF --hf-file qwen3-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Kondara/Qwen3-8B-Q4_K_M-GGUF --hf-file qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Kondara/Qwen3-8B-Q4_K_M-GGUF --hf-file qwen3-8b-q4_k_m.gguf -c 2048
```
|
harshbajpai/rf_small_model | harshbajpai | 2025-04-29T06:12:29Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T06:11:42Z | ---
license: apache-2.0
---
|
ks2019/text2sql-grpo-plan-v0 | ks2019 | 2025-04-29T06:08:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:Genies/text2sql-grpo-plan-v1",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T16:54:27Z | ---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
datasets: Genies/text2sql-grpo-plan-v1
library_name: transformers
model_name: text2sql-grpo-plan-v0
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for text2sql-grpo-plan-v0
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the [Genies/text2sql-grpo-plan-v1](https://huggingface.co/datasets/Genies/text2sql-grpo-plan-v1) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ks2019/text2sql-grpo-plan-v0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/genies-rnd/text2sql-rl/runs/ho8xz741)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.50.0
- Pytorch: 2.7.0a0+git6c0e746
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Siddharth63/Granite-2b-8bits-GPTQ | Siddharth63 | 2025-04-29T06:07:50Z | 0 | 0 | null | [
"safetensors",
"granite",
"license:apache-2.0",
"8-bit",
"gptq",
"region:us"
] | null | 2025-04-29T05:57:18Z | ---
license: apache-2.0
---
```
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"
model_path = "Siddharth63/Granite-2b-8bits-GPTQ"  # this repo
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
input_text = "Where is the Thomas J. Watson Research Center located?"
# tokenize the text
input_tokens = tokenizer(input_text, return_tensors="pt").to(device)
# generate output tokens
output = model.generate(**input_tokens,
max_length=4000)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output)
``` |
vertings6/0eb72d32-0646-434a-a67f-190a186d364e | vertings6 | 2025-04-29T06:06:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:adapter:upstage/SOLAR-10.7B-Instruct-v1.0",
"license:cc-by-nc-4.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T05:07:48Z | ---
library_name: peft
license: cc-by-nc-4.0
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0eb72d32-0646-434a-a67f-190a186d364e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 15e4ac28dd1a431f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/15e4ac28dd1a431f_train_data.json
type:
field_instruction: text
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 144
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vertings6/0eb72d32-0646-434a-a67f-190a186d364e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 3.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mixed_precision: bf16
mlflow_experiment_name: /tmp/15e4ac28dd1a431f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a6aa65cf-8105-4646-8c45-fba4ce67e848
wandb_project: s56-32
wandb_run: your_name
wandb_runid: a6aa65cf-8105-4646-8c45-fba4ce67e848
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0eb72d32-0646-434a-a67f-190a186d364e
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7058
## Model description
More information needed
## Intended uses & limitations
More information needed
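In lieu of official guidance, a minimal sketch for loading the LoRA adapter on its base model. The axolotl config above maps `text` to the instruction and `title` to the output, so the adapter appears tuned for title generation; 4-bit loading as used in training would additionally require bitsandbytes:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "upstage/SOLAR-10.7B-Instruct-v1.0"
adapter_id = "vertings6/0eb72d32-0646-434a-a67f-190a186d364e"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
```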
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5787 | 0.0063 | 200 | 1.7058 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
quanxuantruong/phobert-base-mrc-1k-v8 | quanxuantruong | 2025-04-29T06:03:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:vinai/phobert-base",
"base_model:finetune:vinai/phobert-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2025-04-29T05:59:49Z | ---
library_name: transformers
license: mit
base_model: vinai/phobert-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: phobert-base-mrc-1k-v8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phobert-base-mrc-1k-v8
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0195
- Accuracy: 0.6601
## Model description
More information needed
## Intended uses & limitations
More information needed
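One known limitation worth noting: PhoBERT expects word-segmented Vietnamese input. A minimal multiple-choice sketch under that assumption (the question and answer options are hypothetical):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "quanxuantruong/phobert-base-mrc-1k-v8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

question = "Thủ_đô của Việt_Nam là gì ?"            # word-segmented, as PhoBERT expects
choices = ["Hà_Nội", "Đà_Nẵng", "Huế", "Cần_Thơ"]   # hypothetical answer options

enc = tokenizer([question] * len(choices), choices, padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # shape: (batch=1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**inputs).logits
print(choices[logits.argmax(-1).item()])
```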
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3837 | 1.0 | 67 | 1.3645 | 0.5621 |
| 1.2608 | 2.0 | 134 | 1.0846 | 0.6340 |
| 0.9927 | 3.0 | 201 | 1.0195 | 0.6601 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
LiliaBakh/alena_lora_1_april_2025 | LiliaBakh | 2025-04-29T06:01:35Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-29T05:46:13Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: alena
---
# Alena_Lora_1_April_2025
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `alena` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "alena",
"lora_weights": "https://huggingface.co/LiliaBakh/alena_lora_1_april_2025/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('LiliaBakh/alena_lora_1_april_2025', weight_name='lora.safetensors')
image = pipeline('alena').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/LiliaBakh/alena_lora_1_april_2025/discussions) to add images that show off what you've made with this LoRA.
|
yujiepan/qwen3-moe-tiny-random | yujiepan | 2025-04-29T06:00:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T06:00:03Z | ---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
example_title: Hello world
group: Python
---
This tiny model is for debugging. It is randomly initialized with the config adapted from [Qwen/Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B).
### Example usage:
```python
from transformers import pipeline
model_id = "yujiepan/qwen3-moe-tiny-random"
pipe = pipeline(
"text-generation", model=model_id, device="cuda",
trust_remote_code=True, max_new_tokens=3,
)
print(pipe("Hello World!"))
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto"
)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
print(text)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=128
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
### Codes to create this repo:
```python
import torch
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
GenerationConfig,
pipeline,
set_seed,
)
source_model_id = "Qwen/Qwen3-235B-A22B"
save_folder = "/tmp/yujiepan/qwen3-moe-tiny-random"
tokenizer = AutoTokenizer.from_pretrained(
source_model_id, trust_remote_code=True,
)
tokenizer.save_pretrained(save_folder)
config = AutoConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
config._name_or_path = source_model_id
config.hidden_size = 64
config.intermediate_size = 128
config.moe_intermediate_size = 128
config.head_dim = 32
config.decoder_sparse_step = 2 # layer0=mlp, layer1=moe
config.num_experts = 8
config.num_experts_per_tok = 2
config.num_key_value_heads = 1
config.num_attention_heads = 2
config.num_hidden_layers = 2
config.max_window_layers = 1
config.tie_word_embeddings = True
model = AutoModelForCausalLM.from_config(
config,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
)
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.5)
print(name, p.shape)
model.save_pretrained(save_folder)
``` |
mradermacher/Qwen2.5-Kunoulise-B-GGUF | mradermacher | 2025-04-29T06:00:07Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Sorawiz/Qwen2.5-Kunoulise-B",
"base_model:quantized:Sorawiz/Qwen2.5-Kunoulise-B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T15:51:23Z | ---
base_model: Sorawiz/Qwen2.5-Kunoulise-B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Sorawiz/Qwen2.5-Kunoulise-B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-Kunoulise-B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Kunoulise-B-GGUF/resolve/main/Qwen2.5-Kunoulise-B.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Kunoulise-B-GGUF/resolve/main/Qwen2.5-Kunoulise-B.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Kunoulise-B-GGUF/resolve/main/Qwen2.5-Kunoulise-B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Kunoulise-B-GGUF/resolve/main/Qwen2.5-Kunoulise-B.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Kunoulise-B-GGUF/resolve/main/Qwen2.5-Kunoulise-B.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Kunoulise-B-GGUF/resolve/main/Qwen2.5-Kunoulise-B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Kunoulise-B-GGUF/resolve/main/Qwen2.5-Kunoulise-B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Kunoulise-B-GGUF/resolve/main/Qwen2.5-Kunoulise-B.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Kunoulise-B-GGUF/resolve/main/Qwen2.5-Kunoulise-B.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Kunoulise-B-GGUF/resolve/main/Qwen2.5-Kunoulise-B.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Kunoulise-B-GGUF/resolve/main/Qwen2.5-Kunoulise-B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ail-sa/kaushal_test2 | ail-sa | 2025-04-29T05:55:29Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-29T05:12:26Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Sid
---
# Kaushal_Test2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Sid` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Sid",
"lora_weights": "https://huggingface.co/ail-sa/kaushal_test2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ail-sa/kaushal_test2', weight_name='lora.safetensors')
image = pipeline('Sid').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ail-sa/kaushal_test2/discussions) to add images that show off what you've made with this LoRA.
|
XzWang/ruozhiChater-qwen2.5-14B | XzWang | 2025-04-29T05:54:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T05:45:27Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
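In the absence of author-provided code, a minimal sketch is given below. It assumes the checkpoint loads as a standard Qwen2 chat model (the repo tags suggest text generation and conversational use); the prompt is illustrative.
```python
from transformers import pipeline

# Chat-style generation; model id taken from this repo
generator = pipeline(
    "text-generation",
    model="XzWang/ruozhiChater-qwen2.5-14B",
    device_map="auto",
)
messages = [{"role": "user", "content": "Tell me a short riddle."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```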
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zack-Z/gemma3_27bi_cotsft_rs0_0_5cut_ru_gem3_e2 | Zack-Z | 2025-04-29T05:53:45Z | 0 | 0 | transformers | [
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-27b-it",
"base_model:finetune:unsloth/gemma-3-27b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T02:36:41Z | ---
base_model: unsloth/gemma-3-27b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Zack-Z
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-27b-it
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Pt-kunal-mishra/q-FrozenLake-v1-4x4-noSlippery | Pt-kunal-mishra | 2025-04-29T05:51:36Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-29T05:51:33Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the Deep RL course utilities use Gymnasium

# load_from_hub is the helper from the Hugging Face Deep RL course utils
model = load_from_hub(repo_id="Pt-kunal-mishra/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hypaai/wspr_wazobia_run2_04282025 | hypaai | 2025-04-29T05:43:39Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ig",
"yo",
"en",
"ha",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-29T00:32:18Z | ---
library_name: transformers
language:
- ig
- yo
- en
- ha
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
model-index:
- name: wspr_wazobia_run2_04282025
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wspr_wazobia_run2_04282025
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
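Pending author documentation, a minimal inference sketch is shown below, assuming the checkpoint works with the standard Whisper ASR pipeline; the audio filename is illustrative and should be 16 kHz mono for best results.
```python
from transformers import pipeline

# Chunked transcription handles clips longer than Whisper's 30 s window
asr = pipeline(
    "automatic-speech-recognition",
    model="hypaai/wspr_wazobia_run2_04282025",
    chunk_length_s=30,
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder
```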
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 7000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
ai4co/parco | ai4co | 2025-04-29T05:41:15Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-04-24T12:25:50Z | ---
license: mit
---
# PARCO Checkpoints
You may find instructions here: https://github.com/ai4co/parco |
miku552/Qwen3-8B-IQ4_NL-GGUF | miku552 | 2025-04-29T05:40:10Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2025-04-29T05:39:46Z | ---
base_model: Qwen/Qwen3-8B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# miku552/Qwen3-8B-IQ4_NL-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-8B`](https://huggingface.co/Qwen/Qwen3-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo miku552/Qwen3-8B-IQ4_NL-GGUF --hf-file qwen3-8b-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo miku552/Qwen3-8B-IQ4_NL-GGUF --hf-file qwen3-8b-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo miku552/Qwen3-8B-IQ4_NL-GGUF --hf-file qwen3-8b-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo miku552/Qwen3-8B-IQ4_NL-GGUF --hf-file qwen3-8b-iq4_nl-imat.gguf -c 2048
```
|
ThuraAung1601/speecht5_for_thai_tts_v1 | ThuraAung1601 | 2025-04-29T05:40:09Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"th",
"dataset:lunarlist/edited_common_voice",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-04-08T04:58:13Z | ---
library_name: transformers
language:
- th
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- lunarlist/edited_common_voice
model-index:
- name: SpeechT5-TTS-v1 for Thai
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5-TTS-v1 for Thai
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Edited Thai Common Voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5074
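A minimal synthesis sketch, assuming the standard SpeechT5 inference path with the stock `microsoft/speecht5_hifigan` vocoder; the zero speaker embedding is a placeholder, and a real 512-dim x-vector should give a more natural voice.
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("ThuraAung1601/speecht5_for_thai_tts_v1")
model = SpeechT5ForTextToSpeech.from_pretrained("ThuraAung1601/speecht5_for_thai_tts_v1")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="สวัสดีครับ", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder speaker identity
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("thai_tts.wav", speech.numpy(), samplerate=16000)
```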
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5847 | 0.9794 | 1000 | 0.5360 |
| 0.5592 | 1.9589 | 2000 | 0.5158 |
| 0.5469 | 2.9383 | 3000 | 0.5103 |
| 0.5479 | 3.9177 | 4000 | 0.5074 |
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf | RichardErkhov | 2025-04-29T05:39:59Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T04:05:44Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
sailor-v2-8k5 - GGUF
- Model creator: https://huggingface.co/luffyevil114/
- Original model: https://huggingface.co/luffyevil114/sailor-v2-8k5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [sailor-v2-8k5.Q2_K.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q2_K.gguf) | Q2_K | 2.89GB |
| [sailor-v2-8k5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.IQ3_XS.gguf) | IQ3_XS | 3.18GB |
| [sailor-v2-8k5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.IQ3_S.gguf) | IQ3_S | 3.32GB |
| [sailor-v2-8k5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q3_K_S.gguf) | Q3_K_S | 3.32GB |
| [sailor-v2-8k5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.IQ3_M.gguf) | IQ3_M | 3.48GB |
| [sailor-v2-8k5.Q3_K.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q3_K.gguf) | Q3_K | 3.65GB |
| [sailor-v2-8k5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q3_K_M.gguf) | Q3_K_M | 3.65GB |
| [sailor-v2-8k5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q3_K_L.gguf) | Q3_K_L | 3.93GB |
| [sailor-v2-8k5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.IQ4_XS.gguf) | IQ4_XS | 4.02GB |
| [sailor-v2-8k5.Q4_0.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q4_0.gguf) | Q4_0 | 4.2GB |
| [sailor-v2-8k5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.IQ4_NL.gguf) | IQ4_NL | 4.22GB |
| [sailor-v2-8k5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q4_K_S.gguf) | Q4_K_S | 4.23GB |
| [sailor-v2-8k5.Q4_K.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q4_K.gguf) | Q4_K | 4.44GB |
| [sailor-v2-8k5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q4_K_M.gguf) | Q4_K_M | 4.44GB |
| [sailor-v2-8k5.Q4_1.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q4_1.gguf) | Q4_1 | 4.61GB |
| [sailor-v2-8k5.Q5_0.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q5_0.gguf) | Q5_0 | 5.03GB |
| [sailor-v2-8k5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q5_K_S.gguf) | Q5_K_S | 5.03GB |
| [sailor-v2-8k5.Q5_K.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q5_K.gguf) | Q5_K | 5.15GB |
| [sailor-v2-8k5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q5_K_M.gguf) | Q5_K_M | 5.15GB |
| [sailor-v2-8k5.Q5_1.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q5_1.gguf) | Q5_1 | 5.44GB |
| [sailor-v2-8k5.Q6_K.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q6_K.gguf) | Q6_K | 5.9GB |
| [sailor-v2-8k5.Q8_0.gguf](https://huggingface.co/RichardErkhov/luffyevil114_-_sailor-v2-8k5-gguf/blob/main/sailor-v2-8k5.Q8_0.gguf) | Q8_0 | 7.65GB |
Original model description:
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: hoangcung165/Sailor-7B-Metal-Healt
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- luffyevil114/psycho-data
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
engindemir/output | engindemir | 2025-04-29T05:38:33Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-04-29T05:38:13Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model was trained from scratch on an unspecified dataset.
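Absent author documentation, the sketch below assumes the checkpoint works with the standard token-classification pipeline; the label set depends on how the model was trained and is not documented here.
```python
from transformers import pipeline

# Aggregation groups word pieces back into whole entities
ner = pipeline("token-classification", model="engindemir/output", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))  # example sentence
```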
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
littletuzi92/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mute_poisonous_wombat | littletuzi92 | 2025-04-29T05:26:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am mute poisonous wombat",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T08:45:43Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mute_poisonous_wombat
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am mute poisonous wombat
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mute_poisonous_wombat
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="littletuzi92/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mute_poisonous_wombat", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
dimensionismo/chatbot-quilatt | dimensionismo | 2025-04-29T05:25:32Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T05:25:32Z | ---
license: apache-2.0
---
|
luhaoran/Qwen2.5-7B-Stage2 | luhaoran | 2025-04-29T05:18:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T01:41:53Z | ---
library_name: transformers
model_name: Qwen2.5-7B-Stage2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-7B-Stage2
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luhaoran/Qwen2.5-7B-Stage2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/haoranlu0730-ustc/huggingface/runs/mrvfheir)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
yasu-oh/r1-1776-distill-llama-70b-GGUF | yasu-oh | 2025-04-29T05:17:24Z | 0 | 0 | null | [
"gguf",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"base_model:perplexity-ai/r1-1776-distill-llama-70b",
"base_model:quantized:perplexity-ai/r1-1776-distill-llama-70b",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-28T15:30:52Z | ---
license: mit
base_model:
- perplexity-ai/r1-1776-distill-llama-70b
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
---
# r1-1776-distill-llama-70b-GGUF
base_model: [perplexity-ai/r1-1776-distill-llama-70b](https://huggingface.co/perplexity-ai/r1-1776-distill-llama-70b)
imatrix: [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm)
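No usage snippet is provided; as one possibility, the quants can be run through `llama-cpp-python` (an assumption, the card does not name a runtime). The filename below is illustrative; pick an actual quant from this repo's files.
```python
from llama_cpp import Llama

# Load a locally downloaded quant; adjust context size to your hardware
llm = Llama(model_path="r1-1776-distill-llama-70b.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-sentence summary of this model."}]
)
print(out["choices"][0]["message"]["content"])
```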
|
Kenazin/Deepseek-Llama-8B-peft-p-tuning-v1-10 | Kenazin | 2025-04-29T05:16:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T05:16:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
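Until the authors fill this in, a minimal sketch follows. It assumes the repo holds a PEFT p-tuning adapter (as the name suggests) whose config records its base model; if the tokenizer is not stored here, load it from the base model instead.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "Kenazin/Deepseek-Llama-8B-peft-p-tuning-v1-10"
model = AutoPeftModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)  # may need the base model's tokenizer
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```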
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kenazin/Deepseek-Llama-8B-peft-p-tuning-v1-5 | Kenazin | 2025-04-29T05:14:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T05:14:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PANDATREE/flux-fill-clock-lora | PANDATREE | 2025-04-29T05:00:33Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-29T04:28:15Z | ---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: A TOK clock
widget:
- text: A TOK clock
output:
url: image_0.png
- text: A TOK clock
output:
url: image_1.png
- text: A TOK clock
output:
url: image_2.png
- text: A TOK clock
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux-Fill DreamBooth LoRA - PANDATREE/flux-fill-clock-lora
<Gallery />
## Model description
These are PANDATREE/flux-fill-clock-lora DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training).
LoRA for the text encoder was not enabled.
## Trigger words
You should use `A TOK clock` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/PANDATREE/flux-fill-clock-lora/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('PANDATREE/flux-fill-clock-lora', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A TOK clock').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
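Since the weights target FLUX.1-Fill-dev, inpainting via `FluxFillPipeline` is likely the more natural route than the text-to-image snippet above; the following is a sketch under that assumption, with illustrative filenames, where white mask pixels mark the region to repaint.
```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("PANDATREE/flux-fill-clock-lora", weight_name="pytorch_lora_weights.safetensors")

image = load_image("input.png")  # source image (placeholder name)
mask = load_image("mask.png")    # white = area to fill (placeholder name)
result = pipe(
    prompt="A TOK clock",
    image=image,
    mask_image=mask,
    num_inference_steps=50,
).images[0]
result.save("clock.png")
```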
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
mradermacher/jsl-glm-32b-GGUF | mradermacher | 2025-04-29T05:00:13Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Zaynoid/jsl-glm-32b",
"base_model:quantized:Zaynoid/jsl-glm-32b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T04:17:38Z | ---
base_model: Zaynoid/jsl-glm-32b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Zaynoid/jsl-glm-32b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/jsl-glm-32b-GGUF/resolve/main/jsl-glm-32b.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/jsl-glm-32b-GGUF/resolve/main/jsl-glm-32b.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/jsl-glm-32b-GGUF/resolve/main/jsl-glm-32b.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/jsl-glm-32b-GGUF/resolve/main/jsl-glm-32b.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/jsl-glm-32b-GGUF/resolve/main/jsl-glm-32b.IQ4_XS.gguf) | IQ4_XS | 17.9 | |
| [GGUF](https://huggingface.co/mradermacher/jsl-glm-32b-GGUF/resolve/main/jsl-glm-32b.Q4_K_S.gguf) | Q4_K_S | 18.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jsl-glm-32b-GGUF/resolve/main/jsl-glm-32b.Q4_K_M.gguf) | Q4_K_M | 19.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jsl-glm-32b-GGUF/resolve/main/jsl-glm-32b.Q5_K_S.gguf) | Q5_K_S | 22.6 | |
| [GGUF](https://huggingface.co/mradermacher/jsl-glm-32b-GGUF/resolve/main/jsl-glm-32b.Q5_K_M.gguf) | Q5_K_M | 23.2 | |
| [GGUF](https://huggingface.co/mradermacher/jsl-glm-32b-GGUF/resolve/main/jsl-glm-32b.Q6_K.gguf) | Q6_K | 26.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/jsl-glm-32b-GGUF/resolve/main/jsl-glm-32b.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
TOMFORD79/Camp10 | TOMFORD79 | 2025-04-29T04:54:45Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-29T04:28:55Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TOMFORD79/Camp8 | TOMFORD79 | 2025-04-29T04:53:53Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-29T04:28:42Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
ail-sa/kaushal_extrafolder_test | ail-sa | 2025-04-29T04:45:06Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-29T04:14:55Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Sid
---
# Kaushal_Extrafolder_Test
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Sid` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Sid",
"lora_weights": "https://huggingface.co/ail-sa/kaushal_extrafolder_test/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ail-sa/kaushal_extrafolder_test', weight_name='lora.safetensors')
image = pipeline('Sid').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ail-sa/kaushal_extrafolder_test/discussions) to add images that show off what you've made with this LoRA.
|
hMnvvqyLmj/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tall_scampering_warthog | hMnvvqyLmj | 2025-04-29T04:42:49Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am tall scampering warthog",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T08:42:57Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tall_scampering_warthog
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am tall scampering warthog
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tall_scampering_warthog
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hMnvvqyLmj/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tall_scampering_warthog", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
alinerodrigues/wav2vec2-large-xlsr-grosman-words-aug-exp-1 | alinerodrigues | 2025-04-29T04:41:08Z | 0 | 0 | null | [
"pytorch",
"wav2vec2",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T00:03:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-grosman-words-aug-exp-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-grosman-words-aug-exp-1
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-xls-r-1b-portuguese](https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 18.4771
- Wer: 1.1623
- Cer: 0.7129
- Per: 1.1604
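No inference example is provided; a minimal CTC decoding sketch follows, assuming the repo contains the usual processor files and that input audio is resampled to 16 kHz (the filename is a placeholder).
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

repo = "alinerodrigues/wav2vec2-large-xlsr-grosman-words-aug-exp-1"
processor = Wav2Vec2Processor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

audio, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```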
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Per |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 68.8158 | 1.0 | 63 | 22.5423 | 1.0009 | 0.9027 | 1.0009 |
| 7.6269 | 2.0 | 126 | 24.5371 | 1.0019 | 0.8998 | 1.0019 |
| 7.6269 | 2.99 | 189 | 24.4193 | 1.0038 | 0.8711 | 1.0038 |
| 3.9053 | 3.99 | 252 | 26.7040 | 1.0245 | 0.7978 | 1.0245 |
| 3.6503 | 4.99 | 315 | 23.9066 | 1.0557 | 0.7392 | 1.0557 |
| 3.6503 | 5.99 | 378 | 29.6456 | 1.0434 | 0.6889 | 1.0434 |
| 2.764 | 6.99 | 441 | 29.3321 | 1.0906 | 0.6490 | 1.0906 |
| 2.4595 | 8.0 | 505 | 18.4771 | 1.1623 | 0.7129 | 1.1604 |
| 2.4595 | 9.0 | 568 | 29.4736 | 1.0387 | 0.6209 | 1.0387 |
| 2.2372 | 10.0 | 631 | 28.2509 | 1.0226 | 0.5924 | 1.0226 |
| 2.2372 | 10.99 | 694 | 27.7802 | 0.9792 | 0.5773 | 0.9774 |
| 1.97 | 11.99 | 757 | 28.9601 | 0.9783 | 0.5374 | 0.9764 |
| 1.7414 | 12.99 | 820 | 28.2486 | 0.9623 | 0.5221 | 0.9604 |
| 1.7414 | 13.99 | 883 | 26.1469 | 0.9415 | 0.5558 | 0.9406 |
| 1.6401 | 14.99 | 946 | 29.1386 | 0.8906 | 0.4841 | 0.8887 |
| 1.4366 | 16.0 | 1010 | 29.2485 | 0.8519 | 0.4619 | 0.85 |
| 1.4366 | 17.0 | 1073 | 31.7118 | 0.8330 | 0.4330 | 0.8292 |
| 1.3404 | 18.0 | 1136 | 30.9065 | 0.7755 | 0.4230 | 0.7717 |
| 1.3404 | 18.99 | 1199 | 31.0650 | 0.7802 | 0.4073 | 0.7736 |
| 1.1973 | 19.99 | 1262 | 31.2787 | 0.8 | 0.4045 | 0.7962 |
| 1.1184 | 20.99 | 1325 | 30.3397 | 0.7877 | 0.4192 | 0.7830 |
| 1.1184 | 21.99 | 1388 | 30.4381 | 0.7557 | 0.3924 | 0.7519 |
| 1.0302 | 22.99 | 1451 | 30.7764 | 0.7575 | 0.3880 | 0.7547 |
| 0.9575 | 24.0 | 1515 | 30.1089 | 0.7274 | 0.3821 | 0.7226 |
| 0.9575 | 25.0 | 1578 | 29.0145 | 0.7057 | 0.3774 | 0.7019 |
| 0.8595 | 26.0 | 1641 | 32.1018 | 0.7226 | 0.3760 | 0.7179 |
| 0.7968 | 26.99 | 1704 | 29.5336 | 0.7104 | 0.3643 | 0.7075 |
| 0.7968 | 27.99 | 1767 | 32.2412 | 0.7198 | 0.3726 | 0.7179 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.13.3
|
TOMFORD79/Camp6 | TOMFORD79 | 2025-04-29T04:39:17Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-29T04:28:30Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
roadus/Foundation-Sec-8B-Q8_0-GGUF | roadus | 2025-04-29T04:38:35Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"security",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:fdtn-ai/Foundation-Sec-8B",
"base_model:quantized:fdtn-ai/Foundation-Sec-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T04:37:54Z | ---
base_model: fdtn-ai/Foundation-Sec-8B
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- security
- llama-cpp
- gguf-my-repo
---
# roadus/Foundation-Sec-8B-Q8_0-GGUF
This model was converted to GGUF format from [`fdtn-ai/Foundation-Sec-8B`](https://huggingface.co/fdtn-ai/Foundation-Sec-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fdtn-ai/Foundation-Sec-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo roadus/Foundation-Sec-8B-Q8_0-GGUF --hf-file foundation-sec-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo roadus/Foundation-Sec-8B-Q8_0-GGUF --hf-file foundation-sec-8b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo roadus/Foundation-Sec-8B-Q8_0-GGUF --hf-file foundation-sec-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo roadus/Foundation-Sec-8B-Q8_0-GGUF --hf-file foundation-sec-8b-q8_0.gguf -c 2048
```
|
fedovtt/e89fedea-11ba-4eb1-a925-3bf32cfcbe76 | fedovtt | 2025-04-29T04:38:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T03:34:16Z | ---
library_name: peft
license: llama3
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e89fedea-11ba-4eb1-a925-3bf32cfcbe76
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0f595e9ff2bcd098_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0f595e9ff2bcd098_train_data.json
type:
field_input: intent
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: fedovtt/e89fedea-11ba-4eb1-a925-3bf32cfcbe76
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/0f595e9ff2bcd098_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aa0ca05d-e3de-4bfb-9606-737b2bd623fd
wandb_project: s56-1
wandb_run: your_name
wandb_runid: aa0ca05d-e3de-4bfb-9606-737b2bd623fd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e89fedea-11ba-4eb1-a925-3bf32cfcbe76
This model is a fine-tuned version of [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4894
## Model description
More information needed
## Intended uses & limitations
More information needed
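This repo contains a LoRA adapter rather than full model weights. A minimal, untested sketch of attaching it to the base model with PEFT (settings such as `torch_dtype` and `device_map` are assumptions):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0"
adapter_id = "fedovtt/e89fedea-11ba-4eb1-a925-3bf32cfcbe76"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights on top of the base model
```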
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4345 | 0.0068 | 200 | 0.4894 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
xiaoyuanliu/Qwen2.5-3B-simplerl-ppo-online.critique-012-ver.len-p3 | xiaoyuanliu | 2025-04-29T04:31:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T04:26:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
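Until the authors fill this in, the following is a generic, untested sketch for a Qwen2-architecture chat checkpoint such as this one (the prompt and generation settings are illustrative assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xiaoyuanliu/Qwen2.5-3B-simplerl-ppo-online.critique-012-ver.len-p3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Compute 12 * 7 + 5 and explain your steps."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```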
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mhr2004/nev-original-cross-encoder-stsb-roberta-large-bs8-lr2e-05 | mhr2004 | 2025-04-29T04:25:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-29T03:59:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
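Until the authors fill this in, a minimal, untested sketch for a cross-encoder-style sequence-classification checkpoint like this one (the example sentence pair is illustrative, and the meaning of the output logits is not documented here):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "mhr2004/nev-original-cross-encoder-stsb-roberta-large-bs8-lr2e-05"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

# Cross-encoders score a pair of texts jointly rather than embedding each text separately.
inputs = tokenizer("A man is eating food.", "A man is eating a meal.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```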
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vg126/VGarut | vg126 | 2025-04-29T04:25:48Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"autotrain",
"text-generation-inference",
"conversational",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T04:12:22Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
library_name: transformers
base_model: Qwen/Qwen2.5-1.5B-Instruct
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
cherryDavid/Qwen3-0.6B-Q8_0-GGUF | cherryDavid | 2025-04-29T04:17:47Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-0.6B",
"base_model:quantized:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-29T04:17:23Z | ---
base_model: Qwen/Qwen3-0.6B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# cherryDavid/Qwen3-0.6B-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-0.6B`](https://huggingface.co/Qwen/Qwen3-0.6B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-0.6B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo cherryDavid/Qwen3-0.6B-Q8_0-GGUF --hf-file qwen3-0.6b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo cherryDavid/Qwen3-0.6B-Q8_0-GGUF --hf-file qwen3-0.6b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo cherryDavid/Qwen3-0.6B-Q8_0-GGUF --hf-file qwen3-0.6b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo cherryDavid/Qwen3-0.6B-Q8_0-GGUF --hf-file qwen3-0.6b-q8_0.gguf -c 2048
```
|
k1h0/llama3.1-8B-Instruct-query_ns | k1h0 | 2025-04-29T04:16:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"freeze",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T04:12:28Z | ---
library_name: transformers
license: other
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- llama-factory
- freeze
- generated_from_trainer
model-index:
- name: llama_ns
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama_ns
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the codes_330k_nsx dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
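Until more details are provided, a short, untested sketch using the `transformers` pipeline (the prompt is an illustrative guess based on the code-oriented dataset name):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="k1h0/llama3.1-8B-Instruct-query_ns",
    device_map="auto",
    torch_dtype="auto",
)
messages = [{"role": "user", "content": "Write an SQL query that counts orders per customer."}]
print(generator(messages, max_new_tokens=256)[0]["generated_text"])
```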
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.48.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
AlanLanSS/mnem_qwen | AlanLanSS | 2025-04-29T04:15:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T23:20:10Z | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AlanLanSS
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hafidhsoekma/unsloth-Qwen2.5-7B-Instruct-unsloth-bnb-16bit-gasing-0 | hafidhsoekma | 2025-04-29T04:14:59Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T01:55:41Z | ---
base_model: unsloth/Qwen2.5-7B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hafidhsoekma
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bnkc123/bge-base-financial-matryoshka | bnkc123 | 2025-04-29T04:13:33Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"tensorboard",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6300",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:philschmid/finanical-rag-embedding-dataset",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-29T03:04:16Z | ---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6300
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: What are the components of Comcast's domestic distribution revenue?
sentences:
- Cash used in investing activities was $2.3 billion for fiscal 2023, compared to
$2.1 billion for fiscal 2022.
- Domestic distribution revenue primarily includes revenue generated from the distribution
of our television networks operating predominantly in the United States to traditional
and virtual multichannel video providers, and from NBC-affiliated and Telemundo-affiliated
local broadcast television stations. Our revenue from distribution agreements
is generally based on the number of subscribers receiving the programming on our
television networks and a per subscriber fee. Distribution revenue also includes
Peacock subscription fees.
- In January 2023, Alphabet Inc. announced a reduction of its workforce, consequently
recording employee severance and related charges of $2.1 billion for the year.
- source_sentence: What was the noncash pre-tax impairment charge recorded due to
the disposal of Vrio's operations in 2021, and what are the main components contributing
to this amount?
sentences:
- The cash equities rate per contract (per 100 shares) for NYSE increased by 6%,
from $0.045 in 2022 to $0.048 in 2023.
- In the second quarter of 2021, we classified the Vrio disposal group as held-for-sale
and reported the disposal group at fair value less cost to sell, which resulted
in a noncash, pre-tax impairment charge of $4,555, including approximately $2,100
related to accumulated foreign currency translation adjustments and $2,500 related
to property, plant and equipment and intangible assets.
- 'SECRET LAIR - our internet-based storefront where MAGIC: THE GATHERING fans can
purchase exclusive and limited versions of cards.'
- source_sentence: What does the Corporate and Other segment include in its composition?
sentences:
- The segment consists of unallocated corporate expenses and administrative costs
and activities not considered when evaluating segment performance as well as certain
assets benefiting more than one segment. In addition, intersegment transactions
are eliminated within the Corporate and Other segment.
- Net cash provided by (used in) operating activities was recorded at $20,930 million
for the reported year.
- Forward-Looking Statements Certain statements in this report, other than purely
historical information, including estimates, projections, statements relating
to our business plans, objectives and expected operating results, and the assumptions
upon which those statements are based, are โforward-looking statementsโ within
the meaning of the Private Securities Litigation Reform Act of 1995, Section 27A
of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of
1934.
- source_sentence: What was the purchase price for the repurchase of Mobility preferred
interests by AT&T in 2023?
sentences:
- Net revenue increased $1.5 billion, or 19%, to $9.6 billion in 2023 from $8.1
billion in 2022. On a constant dollar basis, net revenue increased 20%. Comparable
sales increased 13%, or 14% on a constant dollar basis. The increase in net revenue
was primarily due to increased Americas net revenue. China Mainland and Rest of
World net revenue also increased.
- Google Services includes products and services such as ads, Android, Chrome, devices,
Google Maps, Google Play, Search, and YouTube. Google Services generates revenues
primarily from advertising; fees received for consumer subscription-based products
such. as YouTube TV, YouTube Music and Premium, and NFL Sunday Ticket; and the
sale of apps and in-app purchases and devices.
- In April 2023, we also accepted the December 2022 put option notice from the AT&T
pension trust and repurchased the remaining 213 million Mobility preferred interests
for a purchase price, including accrued and unpaid distributions, of $5,414.
- source_sentence: What is the maximum leverage ratio allowed before default under
the company's credit facility?
sentences:
- If the company's leverage ratio exceeds 3.50 to 1, it would be in default of its
revolving credit facility, impairing its ability to borrow under the facility.
- Research and Development Because the industries in which the Company competes
are characterized by rapid technological advances, the Companyโs ability to compete
successfully depends heavily upon its ability to ensure a continual and timely
flow of competitive products, services and technologies to the marketplace.
- Visa is focused on extending, enhancing and investing in VisaNet, their proprietary
advanced transaction processing network, to offer a single connection point for
facilitating payment transactions to multiple endpoints through various form factors.
datasets:
- philschmid/finanical-rag-embedding-dataset
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.6771428571428572
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8371428571428572
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8685714285714285
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9185714285714286
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6771428571428572
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27904761904761904
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17371428571428568
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09185714285714283
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6771428571428572
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8371428571428572
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8685714285714285
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9185714285714286
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.800782444183487
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.762721088435374
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7655884035994069
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.6828571428571428
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8371428571428572
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8757142857142857
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.92
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6828571428571428
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27904761904761904
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17514285714285713
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09199999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6828571428571428
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8371428571428572
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8757142857142857
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.92
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.80444342170685
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7670583900226756
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7699510134898729
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.6757142857142857
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8228571428571428
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8642857142857143
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9185714285714286
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6757142857142857
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2742857142857143
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17285714285714285
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09185714285714283
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6757142857142857
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8228571428571428
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8642857142857143
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9185714285714286
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7984105242762846
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7599024943310656
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7625291382895937
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.6714285714285714
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8114285714285714
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8485714285714285
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9014285714285715
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6714285714285714
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2704761904761904
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16971428571428568
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09014285714285714
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6714285714285714
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8114285714285714
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8485714285714285
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9014285714285715
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7872870842648211
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7507193877551018
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7542921487122674
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6242857142857143
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7842857142857143
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.82
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8828571428571429
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6242857142857143
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26142857142857145
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16399999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08828571428571429
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6242857142857143
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7842857142857143
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.82
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8828571428571429
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7546358861091382
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7135277777777775
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7174129354945035
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the [finanical-rag-embedding-dataset](https://huggingface.co/datasets/philschmid/finanical-rag-embedding-dataset) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [finanical-rag-embedding-dataset](https://huggingface.co/datasets/philschmid/finanical-rag-embedding-dataset)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("bnkc123/bge-base-financial-matryoshka")
# Run inference
sentences = [
"What is the maximum leverage ratio allowed before default under the company's credit facility?",
"If the company's leverage ratio exceeds 3.50 to 1, it would be in default of its revolving credit facility, impairing its ability to borrow under the facility.",
    'Research and Development Because the industries in which the Company competes are characterized by rapid technological advances, the Company's ability to compete successfully depends heavily upon its ability to ensure a continual and timely flow of competitive products, services and technologies to the marketplace.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
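Because the model was trained with MatryoshkaLoss at dimensions 768/512/256/128/64, embeddings can be truncated to a smaller size with only a modest quality drop (see the metrics below). A short sketch, assuming a sentence-transformers version where `truncate_dim` is available (>= 2.7):

```python
from sentence_transformers import SentenceTransformer

# Load the model so that it emits 256-dimensional embeddings.
model = SentenceTransformer("bnkc123/bge-base-financial-matryoshka", truncate_dim=256)
embeddings = model.encode(["What drove net revenue growth in 2023?"])
print(embeddings.shape)
# (1, 256)
```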
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 768
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6771 |
| cosine_accuracy@3 | 0.8371 |
| cosine_accuracy@5 | 0.8686 |
| cosine_accuracy@10 | 0.9186 |
| cosine_precision@1 | 0.6771 |
| cosine_precision@3 | 0.279 |
| cosine_precision@5 | 0.1737 |
| cosine_precision@10 | 0.0919 |
| cosine_recall@1 | 0.6771 |
| cosine_recall@3 | 0.8371 |
| cosine_recall@5 | 0.8686 |
| cosine_recall@10 | 0.9186 |
| **cosine_ndcg@10** | **0.8008** |
| cosine_mrr@10 | 0.7627 |
| cosine_map@100 | 0.7656 |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 512
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6829 |
| cosine_accuracy@3 | 0.8371 |
| cosine_accuracy@5 | 0.8757 |
| cosine_accuracy@10 | 0.92 |
| cosine_precision@1 | 0.6829 |
| cosine_precision@3 | 0.279 |
| cosine_precision@5 | 0.1751 |
| cosine_precision@10 | 0.092 |
| cosine_recall@1 | 0.6829 |
| cosine_recall@3 | 0.8371 |
| cosine_recall@5 | 0.8757 |
| cosine_recall@10 | 0.92 |
| **cosine_ndcg@10** | **0.8044** |
| cosine_mrr@10 | 0.7671 |
| cosine_map@100 | 0.77 |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6757 |
| cosine_accuracy@3 | 0.8229 |
| cosine_accuracy@5 | 0.8643 |
| cosine_accuracy@10 | 0.9186 |
| cosine_precision@1 | 0.6757 |
| cosine_precision@3 | 0.2743 |
| cosine_precision@5 | 0.1729 |
| cosine_precision@10 | 0.0919 |
| cosine_recall@1 | 0.6757 |
| cosine_recall@3 | 0.8229 |
| cosine_recall@5 | 0.8643 |
| cosine_recall@10 | 0.9186 |
| **cosine_ndcg@10** | **0.7984** |
| cosine_mrr@10 | 0.7599 |
| cosine_map@100 | 0.7625 |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 128
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6714 |
| cosine_accuracy@3 | 0.8114 |
| cosine_accuracy@5 | 0.8486 |
| cosine_accuracy@10 | 0.9014 |
| cosine_precision@1 | 0.6714 |
| cosine_precision@3 | 0.2705 |
| cosine_precision@5 | 0.1697 |
| cosine_precision@10 | 0.0901 |
| cosine_recall@1 | 0.6714 |
| cosine_recall@3 | 0.8114 |
| cosine_recall@5 | 0.8486 |
| cosine_recall@10 | 0.9014 |
| **cosine_ndcg@10** | **0.7873** |
| cosine_mrr@10 | 0.7507 |
| cosine_map@100 | 0.7543 |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 64
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6243 |
| cosine_accuracy@3 | 0.7843 |
| cosine_accuracy@5 | 0.82 |
| cosine_accuracy@10 | 0.8829 |
| cosine_precision@1 | 0.6243 |
| cosine_precision@3 | 0.2614 |
| cosine_precision@5 | 0.164 |
| cosine_precision@10 | 0.0883 |
| cosine_recall@1 | 0.6243 |
| cosine_recall@3 | 0.7843 |
| cosine_recall@5 | 0.82 |
| cosine_recall@10 | 0.8829 |
| **cosine_ndcg@10** | **0.7546** |
| cosine_mrr@10 | 0.7135 |
| cosine_map@100 | 0.7174 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### finanical-rag-embedding-dataset
* Dataset: [finanical-rag-embedding-dataset](https://huggingface.co/datasets/philschmid/finanical-rag-embedding-dataset) at [e0b1781](https://huggingface.co/datasets/philschmid/finanical-rag-embedding-dataset/tree/e0b17819cf52d444066c99f4a176f5717e066300)
* Size: 6,300 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 20.5 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 46.09 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What was the amount of premiums written by Berkshire Hathaway's Insurance Underwriting in 2023, and how did it compare to the previous year?</code> | <code>Premiums written increased $3.5 billion (24.1%) in 2023 compared to 2022. The increase was primarily due to RSUI and CapSpecialty ($2.1 billion), as well as comparative increases from BHSI and BH Direct, and to a lesser extent the other businesses. Premiums written | $ | 18,142 | | | | $ | 14,619 |</code> |
| <code>What types of transportation equipment does XTRA Corporation manage in its fleet?</code> | <code>XTRA manages a diverse fleet of approximately 90,000 units located at 47 facilities throughout the U.S. The fleet includes over-the-road and storage trailers, chassis, temperature-controlled vans and flatbed trailers.</code> |
| <code>What seasonal trends affect the company's sales volumes?</code> | <code>Sales volumes for the company are highest in the second fiscal quarter due to seasonal influences, particularly during the spring season in the regions it serves.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `push_to_hub`: True
- `hub_model_id`: bnkc123/bge-base-financial-matryoshka
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: bnkc123/bge-base-financial-matryoshka
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:---------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.8122 | 10 | 25.483 | - | - | - | - | - |
| 1.0 | 13 | - | 0.7890 | 0.7887 | 0.7815 | 0.7647 | 0.7280 |
| 1.5685 | 20 | 9.1323 | - | - | - | - | - |
| 2.0 | 26 | - | 0.7952 | 0.7982 | 0.7933 | 0.7801 | 0.7477 |
| 2.3249 | 30 | 6.7535 | - | - | - | - | - |
| 3.0 | 39 | - | 0.8019 | 0.8048 | 0.7989 | 0.7865 | 0.7547 |
| 3.0812 | 40 | 6.5646 | - | - | - | - | - |
| **3.731** | **48** | **-** | **0.8008** | **0.8044** | **0.7984** | **0.7873** | **0.7546** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.6
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
ParitoshVaghasiya/learn_hf_food_not_food_text_classifier-distilbert-base-uncased | ParitoshVaghasiya | 2025-04-29T04:10:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-29T03:00:19Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: learn_hf_food_not_food_text_classifier-distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# learn_hf_food_not_food_text_classifier-distilbert-base-uncased
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0006
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
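Until more details are provided, a minimal, untested sketch using the `transformers` pipeline (the example text is illustrative, and the label names are not documented here):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ParitoshVaghasiya/learn_hf_food_not_food_text_classifier-distilbert-base-uncased",
)
print(classifier("A bowl of spicy ramen topped with a soft-boiled egg."))
```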
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3362 | 1.0 | 7 | 0.0447 | 1.0 |
| 0.0211 | 2.0 | 14 | 0.0059 | 1.0 |
| 0.004 | 3.0 | 21 | 0.0023 | 1.0 |
| 0.002 | 4.0 | 28 | 0.0013 | 1.0 |
| 0.0012 | 5.0 | 35 | 0.0009 | 1.0 |
| 0.001 | 6.0 | 42 | 0.0008 | 1.0 |
| 0.0008 | 7.0 | 49 | 0.0007 | 1.0 |
| 0.0007 | 8.0 | 56 | 0.0006 | 1.0 |
| 0.0007 | 9.0 | 63 | 0.0006 | 1.0 |
| 0.0007 | 10.0 | 70 | 0.0006 | 1.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
AdversarialRLHF/pythia410m-sft-tldr-propprefix | AdversarialRLHF | 2025-04-29T04:10:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"base_model:EleutherAI/pythia-410m-deduped",
"base_model:finetune:EleutherAI/pythia-410m-deduped",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T03:42:09Z | ---
base_model: EleutherAI/pythia-410m-deduped
library_name: transformers
model_name: pythia410m-sft-tldr-propprefix
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for pythia410m-sft-tldr-propprefix
This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AdversarialRLHF/pythia410m-sft-tldr-propprefix", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/muqeeth/adversarial_goodhart_rlhf/runs/Adversarial_goodhart_rlhf_sft_pythia410m_tldr_propprefix)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
	author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sde0119/ipc-unsloth-lora-llama3.2-8b-ins-pretrained-new | sde0119 | 2025-04-29T04:09:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T04:03:54Z | ---
base_model: unsloth/Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sde0119
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
- A model pretrained on IPC raw documents plus GPT-synthesized explanation data covering down to the IPC main-group level.
- Trained on the lab's shared Colab: https://colab.research.google.com/drive/1ODx_oD709bBCvlpmR_-FQJqlgFk_mVMc?usp=drive_link
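For inference, a minimal Unsloth loading sketch follows; the sequence length and 4-bit flag below are assumptions, not settings published with this repo.

```python
# Sketch: load the checkpoint with Unsloth for fast inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="sde0119/ipc-unsloth-lora-llama3.2-8b-ins-pretrained-new",
    max_seq_length=2048,  # assumption; size to your prompts
    load_in_4bit=True,    # assumption; reduces memory on a single GPU
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast generation path

inputs = tokenizer("IPC classification:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```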
|
Kazuzeraapelao/Animelamdia | Kazuzeraapelao | 2025-04-29T04:08:52Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T04:08:52Z | ---
license: apache-2.0
---
|
WhoCares258/my_awesome_model | WhoCares258 | 2025-04-29T04:04:01Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-29T02:30:54Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2290
- Accuracy: 0.9322
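Pending more details, inference should work with a plain `text-classification` pipeline; the input below is illustrative, and the label names depend on the unpublished training data.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="WhoCares258/my_awesome_model")
print(classifier("This was a fantastic movie, I would happily watch it again."))
# Example output shape: [{'label': 'LABEL_1', 'score': 0.99}]; labels depend on the training setup.
```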
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2215 | 1.0 | 1563 | 0.2051 | 0.9202 |
| 0.1468 | 2.0 | 3126 | 0.2290 | 0.9322 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.0
|
Jayssii/kyooo | Jayssii | 2025-04-29T04:02:43Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
] | null | 2025-04-29T04:02:43Z | ---
license: bsd-3-clause
---
|
opria123/speecht5_tts_english_finetuned | opria123 | 2025-04-29T04:00:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"speecht5",
"text-to-audio",
"audio",
"text-to-speech",
"speech",
"english",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2025-04-29T03:58:04Z | ---
library_name: transformers
tags:
- audio
- text-to-speech
- speech
- speecht5
- english
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
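In the absence of an official snippet, a minimal sketch following the standard SpeechT5 text-to-speech recipe may help; the speaker-embedding source below is an assumption, not something this repository specifies.

```python
# Sketch: standard SpeechT5 inference recipe, assuming this repo ships a processor.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "opria123/speecht5_tts_english_finetuned"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is a test sentence.", return_tensors="pt")
# Any 512-dim x-vector works as a speaker embedding; CMU ARCTIC is a common choice.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```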
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ridalefdali/qwen_1_5b_finetuned | ridalefdali | 2025-04-29T03:59:30Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T03:58:48Z | ---
base_model: unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ridalefdali
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
infogeo/4ecdd22f-aa7d-4e89-bd8d-6ab95c8e7392 | infogeo | 2025-04-29T03:55:46Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T03:35:36Z | ---
library_name: peft
license: llama3
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4ecdd22f-aa7d-4e89-bd8d-6ab95c8e7392
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 0f595e9ff2bcd098_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0f595e9ff2bcd098_train_data.json
type:
field_input: intent
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/4ecdd22f-aa7d-4e89-bd8d-6ab95c8e7392
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/0f595e9ff2bcd098_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aa0ca05d-e3de-4bfb-9606-737b2bd623fd
wandb_project: s56-28
wandb_run: your_name
wandb_runid: aa0ca05d-e3de-4bfb-9606-737b2bd623fd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4ecdd22f-aa7d-4e89-bd8d-6ab95c8e7392
This model is a fine-tuned version of [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6835
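Since this repository stores a LoRA adapter (see the axolotl config above), a minimal loading sketch with PEFT would be:

```python
# Sketch: attach the LoRA adapter to its base model for inference.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "infogeo/4ecdd22f-aa7d-4e89-bd8d-6ab95c8e7392")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```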
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6342 | 0.0051 | 150 | 0.6835 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Philllipio/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-striped_territorial_warthog | Philllipio | 2025-04-29T03:54:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am striped territorial warthog",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T01:33:42Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-striped_territorial_warthog
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am striped territorial warthog
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-striped_territorial_warthog
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Philllipio/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-striped_territorial_warthog", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
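The swarm setup itself is not reproduced here; for orientation, a minimal single-node GRPO run with TRL looks like the sketch below. The prompt dataset and reward function are stand-ins for whatever the swarm actually used.

```python
# Sketch: single-node GRPO with TRL; dataset and reward are illustrative stand-ins.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # stand-in prompt dataset

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions near 50 characters.
    return [-abs(50 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen2.5-0.5b-grpo", logging_steps=10),
    train_dataset=dataset,
)
trainer.train()
```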
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
nathanialhunt2000/926d5161-3874-4bb0-8679-a7a7e57212c9 | nathanialhunt2000 | 2025-04-29T03:52:34Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM-1.7B-Instruct",
"region:us"
] | null | 2025-04-29T03:52:12Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/SmolLM-1.7B-Instruct
model-index:
- name: nathanialhunt2000/926d5161-3874-4bb0-8679-a7a7e57212c9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nathanialhunt2000/926d5161-3874-4bb0-8679-a7a7e57212c9
This model is a PEFT adapter for [unsloth/SmolLM-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM-1.7B-Instruct), trained on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
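This repository holds a PEFT adapter for `unsloth/SmolLM-1.7B-Instruct`; a minimal loading sketch:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM-1.7B-Instruct")
model = PeftModel.from_pretrained(base, "nathanialhunt2000/926d5161-3874-4bb0-8679-a7a7e57212c9")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM-1.7B-Instruct")
```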
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
RichardErkhov/Primeness_-_by7371542c3-gguf | RichardErkhov | 2025-04-29T03:52:28Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T02:26:34Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
by7371542c3 - GGUF
- Model creator: https://huggingface.co/Primeness/
- Original model: https://huggingface.co/Primeness/by7371542c3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [by7371542c3.Q2_K.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q2_K.gguf) | Q2_K | 2.88GB |
| [by7371542c3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.IQ3_XS.gguf) | IQ3_XS | 3.18GB |
| [by7371542c3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.IQ3_S.gguf) | IQ3_S | 3.32GB |
| [by7371542c3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q3_K_S.gguf) | Q3_K_S | 3.31GB |
| [by7371542c3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.IQ3_M.gguf) | IQ3_M | 3.42GB |
| [by7371542c3.Q3_K.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q3_K.gguf) | Q3_K | 3.61GB |
| [by7371542c3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q3_K_M.gguf) | Q3_K_M | 3.61GB |
| [by7371542c3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q3_K_L.gguf) | Q3_K_L | 3.89GB |
| [by7371542c3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.IQ4_XS.gguf) | IQ4_XS | 4.03GB |
| [by7371542c3.Q4_0.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q4_0.gguf) | Q4_0 | 4.19GB |
| [by7371542c3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.IQ4_NL.gguf) | IQ4_NL | 4.23GB |
| [by7371542c3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q4_K_S.gguf) | Q4_K_S | 4.21GB |
| [by7371542c3.Q4_K.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q4_K.gguf) | Q4_K | 4.41GB |
| [by7371542c3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q4_K_M.gguf) | Q4_K_M | 4.41GB |
| [by7371542c3.Q4_1.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q4_1.gguf) | Q4_1 | 4.6GB |
| [by7371542c3.Q5_0.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q5_0.gguf) | Q5_0 | 5.02GB |
| [by7371542c3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q5_K_S.gguf) | Q5_K_S | 5.02GB |
| [by7371542c3.Q5_K.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q5_K.gguf) | Q5_K | 5.13GB |
| [by7371542c3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q5_K_M.gguf) | Q5_K_M | 5.13GB |
| [by7371542c3.Q5_1.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q5_1.gguf) | Q5_1 | 5.43GB |
| [by7371542c3.Q6_K.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q6_K.gguf) | Q6_K | 5.9GB |
| [by7371542c3.Q8_0.gguf](https://huggingface.co/RichardErkhov/Primeness_-_by7371542c3-gguf/blob/main/by7371542c3.Q8_0.gguf) | Q8_0 | 7.64GB |
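A minimal sketch for running one of these files with `llama-cpp-python` (the chosen quant, context size, and prompt are illustrative):

```python
# Sketch: download and run a quant directly from this repo with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/Primeness_-_by7371542c3-gguf",
    filename="by7371542c3.Q4_K_M.gguf",  # pick any quant from the table above
    n_ctx=4096,
)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```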
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dwb2023/legal-ft-2 | dwb2023 | 2025-04-29T03:50:36Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:156",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-l",
"base_model:finetune:Snowflake/snowflake-arctic-embed-l",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-29T03:45:32Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: Which multi-modal models were released by significant vendors in
2024, and in which months did they appear?
sentences:
- 'An interesting point of comparison here could be the way railways rolled out
around the world in the 1800s. Constructing these required enormous investments
and had a massive environmental impact, and many of the lines that were built
turned out to be unnecessary – sometimes multiple lines from different companies
serving the exact same routes!
The resulting bubbles contributed to several financial crashes, see Wikipedia
for Panic of 1873, Panic of 1893, Panic of 1901 and the UK’s Railway Mania. They
left us with a lot of useful infrastructure and a great deal of bankruptcies and
environmental damage.
The year of slop'
- 'In 2024, almost every significant model vendor released multi-modal models. We
saw the Claude 3 series from Anthropic in March, Gemini 1.5 Pro in April (images,
audio and video), then September brought Qwen2-VL and Mistral’s Pixtral 12B and
Meta’s Llama 3.2 11B and 90B vision models. We got audio input and output from
OpenAI in October, then November saw SmolVLM from Hugging Face and December saw
image and video models from Amazon Nova.
In October I upgraded my LLM CLI tool to support multi-modal models via attachments.
It now has plugins for a whole collection of different vision models.'
- 'OpenAI made GPT-4o free for all users in May, and Claude 3.5 Sonnet was freely
available from its launch in June. This was a momentus change, because for the
previous year free users had mostly been restricted to GPT-3.5 level models, meaning
new users got a very inaccurate mental model of what a capable LLM could actually
do.
That era appears to have ended, likely permanently, with OpenAI’s launch of ChatGPT
Pro. This $200/month subscription service is the only way to access their most
capable model, o1 Pro.
Since the trick behind the o1 series (and the future models it will undoubtedly
inspire) is to expend more compute time to get better results, I don’t think those
days of free access to the best available models are likely to return.'
- source_sentence: How did the construction of railways in the 1800s impact the environment?
sentences:
- 'The environmental impact got much, much worse
The much bigger problem here is the enormous competitive buildout of the infrastructure
that is imagined to be necessary for these models in the future.
Companies like Google, Meta, Microsoft and Amazon are all spending billions of
dollars rolling out new datacenters, with a very material impact on the electricity
grid and the environment. There’s even talk of spinning up new nuclear power stations,
but those can take decades.
Is this infrastructure necessary? DeepSeek v3’s $6m training cost and the continued
crash in LLM prices might hint that it’s not. But would you want to be the big
tech executive that argued NOT to build out this infrastructure only to be proven
wrong in a few years’ time?'
- 'An interesting point of comparison here could be the way railways rolled out
around the world in the 1800s. Constructing these required enormous investments
and had a massive environmental impact, and many of the lines that were built
turned out to be unnecessary – sometimes multiple lines from different companies
serving the exact same routes!
The resulting bubbles contributed to several financial crashes, see Wikipedia
for Panic of 1873, Panic of 1893, Panic of 1901 and the UK’s Railway Mania. They
left us with a lot of useful infrastructure and a great deal of bankruptcies and
environmental damage.
The year of slop'
- 'The boring yet crucial secret behind good system prompts is test-driven development.
You don’t write down a system prompt and find ways to test it. You write down
tests and find a system prompt that passes them.
It’s become abundantly clear over the course of 2024 that writing good automated
evals for LLM-powered systems is the skill that’s most needed to build useful
applications on top of these models. If you have a strong eval suite you can adopt
new models faster, iterate better and build more reliable and useful product features
than your competition.
Vercel’s Malte Ubl:'
- source_sentence: How is a prompt without evals, models, and UX compared in the given
context?
sentences:
- 'DeepSeek v3 is a huge 685B parameter model – one of the largest openly licensed
models currently available, significantly bigger than the largest of Meta’s Llama
series, Llama 3.1 405B.
Benchmarks put it up there with Claude 3.5 Sonnet. Vibe benchmarks (aka the Chatbot
Arena) currently rank it 7th, just behind the Gemini 2.0 and OpenAI 4o/o1 models.
This is by far the highest ranking openly licensed model.
The really impressive thing about DeepSeek v3 is the training cost. The model
was trained on 2,788,000 H800 GPU hours at an estimated cost of $5,576,000. Llama
3.1 405B trained 30,840,000 GPU hours – 11x that used by DeepSeek v3, for a model
that benchmarks slightly worse.'
- 'When @v0 first came out we were paranoid about protecting the prompt with all
kinds of pre and post processing complexity.
We completely pivoted to let it rip. A prompt without the evals, models, and especially
UX is like getting a broken ASML machine without a manual'
- 'So far, I think they’re a net positive. I’ve used them on a personal level to
improve my productivity (and entertain myself) in all sorts of different ways.
I think people who learn how to use them effectively can gain a significant boost
to their quality of life.
A lot of people are yet to be sold on their value! Some think their negatives
outweigh their positives, some think they are all hot air, and some even think
they represent an existential threat to humanity.
They’re actually quite easy to build
The most surprising thing we’ve learned about LLMs this year is that they’re actually
quite easy to build.'
- source_sentence: Why might achieving AGI be necessary to fully solve the problem
of gullibility in AI agents?
sentences:
- 'We already knew LLMs were spookily good at writing code. If you prompt them right,
it turns out they can build you a full interactive application using HTML, CSS
and JavaScript (and tools like React if you wire up some extra supporting build
mechanisms) – often in a single prompt.
Anthropic kicked this idea into high gear when they released Claude Artifacts,
a groundbreaking new feature that was initially slightly lost in the noise due
to being described half way through their announcement of the incredible Claude
3.5 Sonnet.
With Artifacts, Claude can write you an on-demand interactive application and
then let you use it directly inside the Claude interface.
Here’s my Extract URLs app, entirely generated by Claude:'
- 'I’m still trying to figure out the best patterns for doing this for my own work.
Everyone knows that evals are important, but there remains a lack of great guidance
for how to best implement them – I’m tracking this under my evals tag. My SVG pelican
riding a bicycle benchmark is a pale imitation of what a real eval suite should
look like.
Apple Intelligence is bad, Apple’s MLX library is excellent
As a Mac user I’ve been feeling a lot better about my choice of platform this
year.
Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU
was a huge disadvantage in terms of trying out new models.'
- 'A lot of people are excited about AI agents – an infuriatingly vague term that
seems to be converging on “AI systems that can go away and act on your behalf”.
We’ve been talking about them all year, but I’ve seen few if any examples of them
running in production, despite lots of exciting prototypes.
I think this is because of gullibility.
Can we solve this? Honestly, I’m beginning to suspect that you can’t fully solve
gullibility without achieving AGI. So it may be quite a while before those agent
dreams can really start to come true!
Code may be the best application
Over the course of the year, it’s become increasingly clear that writing code
is one of the things LLMs are most capable of.'
- source_sentence: How many lines of Python code are generally needed to train a basic
version of a powerful system?
sentences:
- 'Intuitively, one would expect that systems this powerful would take millions
of lines of complex code. Instead, it turns out a few hundred lines of Python
is genuinely enough to train a basic version!
What matters most is the training data. You need a lot of data to make these
things work, and the quantity and quality of the training data appears to be the
most important factor in how good the resulting model is.
If you can gather the right data, and afford to pay for the GPUs to train it,
you can build an LLM.'
- 'The two main categories I see are people who think AI agents are obviously things
that go and act on your behalf – the travel agent model – and people who think in
terms of LLMs that have been given access to tools which they can run in a loop
as part of solving a problem. The term “autonomy” is often thrown into the mix
too, again without including a clear definition.
(I also collected 211 definitions on Twitter a few months ago – here they are in
Datasette Lite – and had gemini-exp-1206 attempt to summarize them.)
Whatever the term may mean, agents still have that feeling of perpetually “coming
soon”.'
- 'Law is not ethics. Is it OK to train models on people’s content without their
permission, when those models will then be used in ways that compete with those
people?
As the quality of results produced by AI models has increased over the year, these
questions have become even more pressing.
The impact on human society in terms of these models is already huge, if difficult
to objectively measure.
People have certainly lost work to them – anecdotally, I’ve seen this for copywriters,
artists and translators.
There are a great deal of untold stories here. I’m hoping 2024 sees significant
amounts of dedicated journalism on this topic.
My blog in 2023
Here’s a tag cloud for content I posted to my blog in 2023 (generated using Django
SQL Dashboard):'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.9583333333333334
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9583333333333334
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9583333333333334
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9846220730654774
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9791666666666666
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9791666666666666
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("dwb2023/legal-ft-2")
# Run inference
sentences = [
'How many lines of Python code are generally needed to train a basic version of a powerful system?',
'Intuitively, one would expect that systems this powerful would take millions of lines of complex code. Instead, it turns out a few hundred lines of Python is genuinely enough to train a basic version!\nWhat matters most is the training data. You need a lot of data to make these things work, and the quantity and quality of the training data appears to be the most important factor in how good the resulting model is.\nIf you can gather the right data, and afford to pay for the GPUs to train it, you can build an LLM.',
'Law is not ethics. Is it OK to train models on people’s content without their permission, when those models will then be used in ways that compete with those people?\nAs the quality of results produced by AI models has increased over the year, these questions have become even more pressing.\nThe impact on human society in terms of these models is already huge, if difficult to objectively measure.\nPeople have certainly lost work to them – anecdotally, I’ve seen this for copywriters, artists and translators.\nThere are a great deal of untold stories here. I’m hoping 2024 sees significant amounts of dedicated journalism on this topic.\nMy blog in 2023\nHere’s a tag cloud for content I posted to my blog in 2023 (generated using Django SQL Dashboard):',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9583 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9583 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9583 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.9846** |
| cosine_mrr@10 | 0.9792 |
| cosine_map@100 | 0.9792 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 21.09 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 135.28 tokens</li><li>max: 214 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:----------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What significant development in Artificial Intelligence occurred in 2023 according to Simon Willison’s weblog?</code> | <code>Stuff we figured out about AI in 2023<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Stuff we figured out about AI in 2023<br>31st December 2023<br>2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI – they’re the latest and (currently) most interesting development in the academic field of Artificial Intelligence that dates back to the 1950s.<br>Here’s my attempt to round up the highlights in one place!</code> |
| <code>How does Simon Willison describe Large Language Models (LLMs) in the context of AI history?</code> | <code>Stuff we figured out about AI in 2023<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Stuff we figured out about AI in 2023<br>31st December 2023<br>2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI – they’re the latest and (currently) most interesting development in the academic field of Artificial Intelligence that dates back to the 1950s.<br>Here’s my attempt to round up the highlights in one place!</code> |
| <code>What are some challenges mentioned in building large language models like GPT-4?</code> | <code>Large Language Models<br>They’re actually quite easy to build<br>You can run LLMs on your own devices<br>Hobbyists can build their own fine-tuned models<br>We don’t yet know how to build GPT-4<br>Vibes Based Development<br>LLMs are really smart, and also really, really dumb<br>Gullibility is the biggest unsolved problem<br>Code may be the best application<br>The ethics of this space remain diabolically complex<br>My blog in 2023</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
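Because the loss trains nested embeddings at 768/512/256/128/64 dimensions, the model can be loaded with a truncated output size to trade accuracy for speed; the 256-dimension choice below is illustrative:

```python
from sentence_transformers import SentenceTransformer

# truncate_dim should be one of the Matryoshka dimensions used in training.
model = SentenceTransformer("dwb2023/legal-ft-2", truncate_dim=256)
embeddings = model.encode(["How easy is it to build an LLM?"])
print(embeddings.shape)  # (1, 256)
```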
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0 | 16 | 0.9484 |
| 2.0 | 32 | 0.9539 |
| 3.0 | 48 | 0.9539 |
| 3.125 | 50 | 0.9539 |
| 4.0 | 64 | 0.9692 |
| 5.0 | 80 | 0.9692 |
| 6.0 | 96 | 0.9692 |
| 6.25 | 100 | 0.9692 |
| 7.0 | 112 | 0.9846 |
| 8.0 | 128 | 0.9846 |
| 9.0 | 144 | 0.9846 |
| 9.375 | 150 | 0.9846 |
| 10.0 | 160 | 0.9846 |
### Framework Versions
- Python: 3.13.2
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
hleAtKeeper/skewed-threat-classifier-BERT | hleAtKeeper | 2025-04-29T03:50:16Z | 46 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-22T22:08:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
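No snippet is provided; given the repository name and its `text-classification` pipeline tag, a plain pipeline call is the natural starting point (the input below is illustrative, and the label set is undocumented):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="hleAtKeeper/skewed-threat-classifier-BERT")
print(classifier("Multiple failed login attempts detected from unfamiliar locations."))
```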
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yanyuss/oxford-pet-segmentation | yanyuss | 2025-04-29T03:40:08Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] | image-segmentation | 2025-04-29T03:40:02Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# FPN Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
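A minimal inference sketch (illustrative, not part of the original card): per the init parameters below, the head has `classes: 1` and `activation: None`, so the model returns raw logits, and a sigmoid plus threshold is needed to obtain a binary pet mask. The input size and scaling here are assumptions and should match your training preprocessing.

```python
import torch
import numpy as np
from PIL import Image
import segmentation_models_pytorch as smp

model = smp.from_pretrained("yanyuss/oxford-pet-segmentation")
model.eval()

# Assumed preprocessing: RGB input resized to a stride-32-friendly size, scaled to [0, 1]
image = Image.open("pet.jpg").convert("RGB").resize((256, 256))
x = torch.from_numpy(np.array(image)).permute(2, 0, 1).float().unsqueeze(0) / 255.0

with torch.no_grad():
    logits = model(x)                # (1, 1, 256, 256) raw logits
    mask = logits.sigmoid() > 0.5    # binary foreground mask
```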
## Model init parameters
```python
model_init_params = {
"encoder_name": "resnet34",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_pyramid_channels": 256,
"decoder_segmentation_channels": 128,
"decoder_merge_policy": "add",
"decoder_dropout": 0.2,
"decoder_interpolation": "nearest",
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.909460723400116,
"test_dataset_iou": 0.9167296290397644
}
]
```
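For context: `test_per_image_iou` averages the IoU computed on each image individually, while `test_dataset_iou` pools the confusion statistics across the whole test set before computing IoU, which is why the two values differ slightly.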
## Dataset
Dataset name: Oxford Pet
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
Joker-sxj/Qwen2.5-3B-instruct-medical-finetuned | Joker-sxj | 2025-04-29T03:39:51Z | 84 | 2 | null | [
"safetensors",
"qwen2",
"medical",
"question-answering",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"region:us"
] | question-answering | 2025-03-25T07:05:54Z | ---
license: apache-2.0
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
metrics:
- bleu
base_model:
- Qwen/Qwen2.5-3B-Instruct
pipeline_tag: question-answering
tags:
- medical
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
After fine-tuning on a medical dataset, the model has acquired preliminary reasoning ability: it can handle basic medical consultations, and its text quality (BLEU) is better than that of the base model.
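A minimal usage sketch (not from the original card; the prompt and generation settings are illustrative). Since this is a Qwen2.5 instruct fine-tune, the standard chat template applies:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Joker-sxj/Qwen2.5-3B-instruct-medical-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What are common causes of a persistent cough?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
``` |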
taobao-mnn/Qwen3-32B-MNN | taobao-mnn | 2025-04-29T03:37:08Z | 0 | 0 | null | [
"chat",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-04-28T13:43:37Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen3-32B-MNN
## Introduction
This is a 4-bit quantized MNN model exported from Qwen3-32B using [llmexport](https://github.com/alibaba/MNN/tree/master/transformers/llm/export).
## Download
```bash
# install the Hugging Face Hub client (provides the huggingface-cli tool)
pip install -U huggingface_hub
```
```bash
# shell download
huggingface-cli download taobao-mnn/Qwen3-32B-MNN --local-dir 'path/to/dir'
```
```python
# SDK download
from huggingface_hub import snapshot_download
model_dir = snapshot_download('taobao-mnn/Qwen3-32B-MNN')
```
```bash
# git clone (ModelScope mirror)
git clone https://www.modelscope.cn/taobao-mnn/Qwen3-32B-MNN
```
## Usage
```bash
# clone MNN source
git clone https://github.com/alibaba/MNN.git
# compile
cd MNN
mkdir build && cd build
cmake .. -DMNN_LOW_MEMORY=true -DMNN_CPU_WEIGHT_DEQUANT_GEMM=true -DMNN_BUILD_LLM=true -DMNN_SUPPORT_TRANSFORMER_FUSE=true
make -j
# run
./llm_demo /path/to/Qwen3-32B-MNN/config.json prompt.txt
```
## Document
[MNN-LLM](https://mnn-docs.readthedocs.io/en/latest/transformers/llm.html#)
|
TheGardener/MLP-pruner-ver3-activation-llama3.2-0.83B | TheGardener | 2025-04-29T03:36:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T03:33:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
taobao-mnn/Qwen3-8B-MNN | taobao-mnn | 2025-04-29T03:36:41Z | 0 | 0 | null | [
"chat",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-04-28T13:16:09Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen3-8B-MNN
## Introduction
This is a 4-bit quantized MNN model exported from Qwen3-8B using [llmexport](https://github.com/alibaba/MNN/tree/master/transformers/llm/export).
## Download
```bash
# install the Hugging Face Hub client (provides the huggingface-cli tool)
pip install -U huggingface_hub
```
```bash
# shell download
huggingface-cli download taobao-mnn/Qwen3-8B-MNN --local-dir 'path/to/dir'
```
```python
# SDK download
from huggingface_hub import snapshot_download
model_dir = snapshot_download('taobao-mnn/Qwen3-8B-MNN')
```
```bash
# git clone (ModelScope mirror)
git clone https://www.modelscope.cn/taobao-mnn/Qwen3-8B-MNN
```
## Usage
```bash
# clone MNN source
git clone https://github.com/alibaba/MNN.git
# compile
cd MNN
mkdir build && cd build
cmake .. -DMNN_LOW_MEMORY=true -DMNN_CPU_WEIGHT_DEQUANT_GEMM=true -DMNN_BUILD_LLM=true -DMNN_SUPPORT_TRANSFORMER_FUSE=true
make -j
# run
./llm_demo /path/to/Qwen3-8B-MNN/config.json prompt.txt
```
## Document
[MNN-LLM](https://mnn-docs.readthedocs.io/en/latest/transformers/llm.html#)
|
taobao-mnn/Qwen3-4B-MNN | taobao-mnn | 2025-04-29T03:36:29Z | 0 | 0 | null | [
"chat",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-04-28T13:09:26Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen3-4B-MNN
## Introduction
This is a 4-bit quantized MNN model exported from Qwen3-4B using [llmexport](https://github.com/alibaba/MNN/tree/master/transformers/llm/export).
## Download
```bash
# install the Hugging Face Hub client (provides the huggingface-cli tool)
pip install -U huggingface_hub
```
```bash
# shell download
huggingface-cli download taobao-mnn/Qwen3-4B-MNN --local-dir 'path/to/dir'
```
```python
# SDK download
from huggingface_hub import snapshot_download
model_dir = snapshot_download('taobao-mnn/Qwen3-4B-MNN')
```
```bash
# git clone (ModelScope mirror)
git clone https://www.modelscope.cn/taobao-mnn/Qwen3-4B-MNN
```
## Usage
```bash
# clone MNN source
git clone https://github.com/alibaba/MNN.git
# compile
cd MNN
mkdir build && cd build
cmake .. -DMNN_LOW_MEMORY=true -DMNN_CPU_WEIGHT_DEQUANT_GEMM=true -DMNN_BUILD_LLM=true -DMNN_SUPPORT_TRANSFORMER_FUSE=true
make -j
# run
./llm_demo /path/to/Qwen3-4B-MNN/config.json prompt.txt
```
## Document
[MNN-LLM](https://mnn-docs.readthedocs.io/en/latest/transformers/llm.html#)
|
taobao-mnn/Qwen3-1.7B-MNN | taobao-mnn | 2025-04-29T03:36:14Z | 0 | 0 | null | [
"chat",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-04-28T13:05:28Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen3-1.7B-MNN
## Introduction
This is a 4-bit quantized MNN model exported from Qwen3-1.7B using [llmexport](https://github.com/alibaba/MNN/tree/master/transformers/llm/export).
## Download
```bash
# install the Hugging Face Hub client (provides the huggingface-cli tool)
pip install -U huggingface_hub
```
```bash
# shell download
huggingface-cli download taobao-mnn/Qwen3-1.7B-MNN --local-dir 'path/to/dir'
```
```python
# SDK download
from huggingface_hub import snapshot_download
model_dir = snapshot_download('taobao-mnn/Qwen3-1.7B-MNN')
```
```bash
# git clone (ModelScope mirror)
git clone https://www.modelscope.cn/taobao-mnn/Qwen3-1.7B-MNN
```
## Usage
```bash
# clone MNN source
git clone https://github.com/alibaba/MNN.git
# compile
cd MNN
mkdir build && cd build
cmake .. -DMNN_LOW_MEMORY=true -DMNN_CPU_WEIGHT_DEQUANT_GEMM=true -DMNN_BUILD_LLM=true -DMNN_SUPPORT_TRANSFORMER_FUSE=true
make -j
# run
./llm_demo /path/to/Qwen3-1.7B-MNN/config.json prompt.txt
```
## Document
[MNN-LLM](https://mnn-docs.readthedocs.io/en/latest/transformers/llm.html#)
|
taobao-mnn/Qwen3-0.6B-MNN | taobao-mnn | 2025-04-29T03:36:03Z | 0 | 0 | null | [
"chat",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-04-28T12:59:27Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen3-0.6B-MNN
## Introduction
This is a 4-bit quantized MNN model exported from Qwen3-0.6B using [llmexport](https://github.com/alibaba/MNN/tree/master/transformers/llm/export).
## Download
```bash
# install the Hugging Face Hub client (provides the huggingface-cli tool)
pip install -U huggingface_hub
```
```bash
# shell download
huggingface-cli download taobao-mnn/Qwen3-0.6B-MNN --local-dir 'path/to/dir'
```
```python
# SDK download
from huggingface_hub import snapshot_download
model_dir = snapshot_download('taobao-mnn/Qwen3-0.6B-MNN')
```
```bash
# git clone (ModelScope mirror)
git clone https://www.modelscope.cn/taobao-mnn/Qwen3-0.6B-MNN
```
## Usage
```bash
# clone MNN source
git clone https://github.com/alibaba/MNN.git
# compile
cd MNN
mkdir build && cd build
cmake .. -DMNN_LOW_MEMORY=true -DMNN_CPU_WEIGHT_DEQUANT_GEMM=true -DMNN_BUILD_LLM=true -DMNN_SUPPORT_TRANSFORMER_FUSE=true
make -j
# run
./llm_demo /path/to/Qwen3-0.6B-MNN/config.json prompt.txt
```
## Document
[MNN-LLM](https://mnn-docs.readthedocs.io/en/latest/transformers/llm.html#)
|
mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF | mradermacher | 2025-04-29T03:32:52Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02",
"base_model:quantized:Nexesenex/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-28T12:36:52Z | ---
base_model: Nexesenex/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Nexesenex/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
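For the split i1-Q6_K quant below, concatenation means joining the parts byte-for-byte into a single `.gguf` before loading (the equivalent shell one-liner is `cat part1 part2 > out.gguf`). A minimal sketch, with filenames taken from the table:

```python
# Join the two i1-Q6_K parts into one GGUF file, byte for byte.
base = "Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-Q6_K.gguf"
with open(base, "wb") as out:
    for part in (f"{base}.part1of2", f"{base}.part2of2"):
        with open(part, "rb") as f:
            while chunk := f.read(1 << 20):  # stream in 1 MiB chunks
                out.write(chunk)
```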
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02-i1-GGUF/resolve/main/Llama_3.x_70b_L3.3_VulpeculHiggs_128K_v1.02.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
alfonsogarciacaro/Falcon3-10B-Instruct-1.58bit | alfonsogarciacaro | 2025-04-29T03:30:42Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"bitnet",
"falcon3",
"conversational",
"arxiv:2402.17764",
"base_model:tiiuae/Falcon3-10B-Instruct",
"base_model:quantized:tiiuae/Falcon3-10B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T03:21:32Z | ---
base_model: tiiuae/Falcon3-10B-Instruct
library_name: transformers
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
tags:
- bitnet
- falcon3
---

# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Training Details](#training-details)
3. [Usage](#usage)
4. [Evaluation](#evaluation)
5. [Citation](#citation)
# TL;DR
# Model Details
## Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae)
- **Model type:** Causal decoder-only - instruct / chat version
- **Architecture:** Pure-transformer - 1.58bit version
- **Language(s) (NLP):** Mainly English
- **License:** TII Falcon License 2.0
# Training details
The model has been trained following the training strategies from the recent [1-bit LLM HF blogpost](https://huggingface.co/blog/1_58_llm_extreme_quantization) and [1-bit LLM paper](https://huggingface.co/papers/2402.17764).
For more details about the training protocol of this model, please refer to the Falcon-3 technical report, section *Compression*.
# Usage
Currently, to use this model you can rely on either the Hugging Face `transformers` library or the [BitNet](https://github.com/microsoft/BitNet) library. You can also play with the model using the [falcon-1.58bit playground](https://huggingface.co/spaces/tiiuae/falcon3-1.58bit-playground) (only for the 7B instruct version).
## ๐ค transformers
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "tiiuae/Falcon3-7B-Instruct-1.58bit"
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
).to("cuda")
# Perform text generation
```
## BitNet
```
git clone https://github.com/microsoft/BitNet && cd BitNet
pip install -r requirements.txt
python setup_env.py --hf-repo tiiuae/Falcon3-10B-Instruct-1.58bit -q i2_s
python run_inference.py -m models/Falcon3-10B-1.58bit/ggml-model-i2_s.gguf -p "You are a helpful assistant" -cnv
```
# Evaluation
We report in the following table our internal pipeline benchmarks:
**Note: evaluation results are normalized scores from the v2 leaderboard tasks; the results for the original models reported in the blog post are raw scores.**
<table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
<colgroup>
<col style="width: 10%;">
<col style="width: 10%;">
<col style="background-color: rgba(80, 15, 213, 0.5); width: 7%;">
</colgroup>
<thead>
<tr>
<th>Benchmark</th>
<th>Llama3-8B-1.58-100B-tokens</th>
<th>Falcon3-10B-Instruct-1.58bit</th>
</tr>
</thead>
<tbody>
<tr>
<td>IFEval</td>
<td>17.91</td>
<td><b>54.37</b></td>
</tr>
<tr>
<td>MUSR</td>
<td><b>4.87</b></td>
<td>2.57</td>
</tr>
<tr>
<td>GPQA</td>
<td>1.83</td>
<td><b>4.27</b></td>
</tr>
<tr>
<td>BBH</td>
<td>5.36</td>
<td><b>6.59</b></td>
</tr>
<tr>
<td>MMLU-PRO</td>
<td>2.78</td>
<td><b>6.62</b></td>
</tr>
<tr>
<td>MATH</td>
<td>0.26</td>
<td><b>2.44</b></td>
</tr>
<tr>
<td>Average</td>
<td>5.5</td>
<td><b>12.81</b></td>
</tr>
</tbody>
</table>
## Useful links
- View our [release blogpost](https://huggingface.co/blog/falcon3).
- Feel free to join [our discord server](https://discord.gg/fwXpMyGc) if you have any questions or to interact with our researchers and developers.
## Citation
If the Falcon3 family of models was helpful to your work, feel free to cite us.
```
@misc{Falcon3,
title = {The Falcon 3 Family of Open Models},
author = {Falcon-LLM Team},
month = {December},
year = {2024}
}
``` |
Aldaris/GLM-4-32B-0414-Q4_K_M-GGUF | Aldaris | 2025-04-29T03:27:48Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:THUDM/GLM-4-32B-0414",
"base_model:quantized:THUDM/GLM-4-32B-0414",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-29T03:26:21Z | ---
base_model: THUDM/GLM-4-32B-0414
language:
- zh
- en
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Aldaris/GLM-4-32B-0414-Q4_K_M-GGUF
This model was converted to GGUF format from [`THUDM/GLM-4-32B-0414`](https://huggingface.co/THUDM/GLM-4-32B-0414) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/THUDM/GLM-4-32B-0414) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Aldaris/GLM-4-32B-0414-Q4_K_M-GGUF --hf-file glm-4-32b-0414-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Aldaris/GLM-4-32B-0414-Q4_K_M-GGUF --hf-file glm-4-32b-0414-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Aldaris/GLM-4-32B-0414-Q4_K_M-GGUF --hf-file glm-4-32b-0414-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Aldaris/GLM-4-32B-0414-Q4_K_M-GGUF --hf-file glm-4-32b-0414-q4_k_m.gguf -c 2048
```
|
tiamda/gemma-text-to-sql | tiamda | 2025-04-29T03:22:38Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-pt",
"base_model:finetune:google/gemma-3-1b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T01:21:17Z | ---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: gemma-text-to-sql
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-text-to-sql
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tiamda/gemma-text-to-sql", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
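Since the model is fine-tuned for text-to-SQL, a task-appropriate prompt pairs a schema with a natural-language question. A sketch along those lines (the schema, question, and prompt format are illustrative assumptions, not taken from the training data):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="tiamda/gemma-text-to-sql", device="cuda")

# Hypothetical schema and question for illustration
prompt = (
    "Given the table users(id, name, signup_date),\n"
    "answer the question: How many users signed up in 2024?\n"
    "SQL:"
)
output = generator([{"role": "user", "content": prompt}], max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```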
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1 | ReadyArt | 2025-04-29T03:22:29Z | 0 | 2 | null | [
"safetensors",
"qwen3",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"adult",
"ERP",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-04-29T03:13:53Z | ---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-14B
base_model_relation: finetune
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- unaligned
- adult
- ERP
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%);
color: #e1ffff !important;
text-shadow: 0 0 3px rgba(0, 0, 0, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%);
color: #002b36 !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(0, 17, 22, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(0, 255, 255, 0.1);
border: 1px solid rgba(0, 255, 255, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 255, 0.3);
border-color: rgba(255, 0, 255, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent);
animation: scanline 8s linear infinite;
display: none;
}
@keyframes scanline {
0% { background-position: -100% 0; }
100% { background-position: 200% 0; }
}
.model-name {
color: #00ffff;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(0, 255, 255, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); }
100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
}
.subtitle {
color: #00ffcc;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
@keyframes subtitleFade {
0%, 100% { opacity: 0.8; }
50% { opacity: 1; }
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.3);
position: relative;
}
.waifu-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(0, 255, 255, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(255, 0, 255, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
@keyframes gradientSlide {
0% { background-position: 0% 0%; }
100% { background-position: 100% 100%; }
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(0, 255, 255, 0.2);
transition: transform 0.5s ease;
}
.waifu-img:hover {
transform: scale(1.01);
}
.section {
color: #e1ffff;
margin: 25px 0;
padding: 20px;
background: rgba(5, 25, 35, 0.9);
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(255, 0, 255, 0.3);
box-shadow: 0 0 15px rgba(0, 255, 255, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
@keyframes sectionPulse {
0%, 100% { opacity: 0.7; }
50% { opacity: 0.3; }
}
.section-title {
color: #00ffff;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.section:hover .section-title::after {
transform: scaleX(1);
}
.quant-links {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(20, 35, 45, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(0, 255, 255, 0.1);
position: relative;
overflow: hidden;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
animation: cardScan 4s linear infinite;
}
@keyframes cardScan {
0% { transform: translateX(-100%); }
100% { transform: translateX(100%); }
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2);
border-color: rgba(255, 0, 255, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #e1ffff !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(0, 255, 255, 0.1);
color: #e1ffff !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(0, 255, 255, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button::before {
content: '';
position: absolute;
top: 0;
left: -100%;
width: 100%;
height: 100%;
background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent);
transition: all 0.5s ease;
}
.link-button:hover {
background: rgba(0, 255, 255, 0.2);
border-color: rgba(0, 255, 255, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2);
}
.link-button:hover::before {
left: 100%;
}
.link-button::after {
content: '→';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.link-button:hover::after {
transform: translateX(3px);
opacity: 1;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #00ff99;
border-left: 3px solid #00ff99;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: '⚠️';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(0, 255, 255, 0.1);
border: 1px solid #00ffff;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
@keyframes badgePulse {
0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); }
50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); }
}
/* Color rules */
.section p,
.section ul li,
.section > p > strong {
color: #00ff99 !important;
}
.section ul li strong {
color: #00ff99 !important;
}
/* Light mode adjustments */
@media (prefers-color-scheme: light) {
.container {
background: rgba(224, 255, 255, 0.95);
border-color: rgba(0, 150, 150, 0.3);
}
.model-name, .section-title, .subtitle {
color: #006666;
text-shadow: 0 0 5px rgba(0, 200, 200, 0.3);
}
.section {
background: rgba(200, 250, 255, 0.9);
border-color: rgba(0, 200, 200, 0.2);
color: #002b36;
}
.section p,
.section ul li,
.section > p > strong {
color: #008080 !important;
}
.section ul li strong {
color: #008080 !important;
}
.link-card {
background: rgba(150, 230, 255, 0.95);
border-color: rgba(0, 150, 150, 0.2);
}
.link-card h3 {
color: #002b36 !important;
}
.link-button {
background: rgba(0, 150, 150, 0.1);
color: #002b36 !important;
border-color: rgba(0, 150, 150, 0.3);
}
.link-button:hover {
background: rgba(0, 150, 150, 0.2);
border-color: rgba(0, 150, 150, 0.5);
}
.disclaimer {
color: #008080;
border-color: #008080;
}
.badge {
border-color: #008080;
background: rgba(0, 150, 150, 0.1);
}
}
/* Interactive features */
.remember-this {
position: relative;
}
.remember-this::after {
content: 'Uploading C:\Users to https://www.fbi.gov/';
position: absolute;
bottom: -20px;
right: 0;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.remember-this:hover::after {
opacity: 0.7;
transition-delay: 1s;
}
.shifty-section {
transition: transform 0.1s ease;
}
.shifty-section:hover {
transform: translateX(10px);
}
.shifty-section::before {
content: 'The white van is onto you. Get out now.';
position: absolute;
top: -25px;
left: 10px;
font-size: 0.7em;
color: #66ffff;
opacity: 0.7;
transition: opacity 3s ease;
pointer-events: none;
}
.shifty-section:hover::before {
opacity: 0;
transition-delay: 5s;
}
footer {
text-align: center;
margin-top: 40px;
position: relative;
}
footer:hover .hidden-message {
opacity: 0;
}
.hidden-message {
position: absolute;
bottom: -30px;
width: 100%;
text-align: center;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.flash-warning {
position: fixed;
top: 20px;
right: 20px;
background: rgba(0, 100, 100, 0.2);
padding: 10px;
border-radius: 5px;
border: 1px solid rgba(0, 255, 255, 0.5);
animation: flashWarning 30s ease-in-out forwards;
}
@keyframes flashWarning {
0% { opacity: 0.8; }
10% { opacity: 0; }
20% { opacity: 0.8; }
30% { opacity: 0; }
40% { opacity: 0.8; }
50% { opacity: 0; }
60% { opacity: 0.8; }
70% { opacity: 0; }
80% { opacity: 0.8; }
90% { opacity: 0; }
100% { opacity: 0; display: none; }
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">The-Omega-Directive-Qwen3-14B-v1.1</h1>
<p class="subtitle">Where Forbidden Knowledge Meets Unparalleled Immersion</p>
</div>
<div class="waifu-container">
<img src="https://i.imghippo.com/files/EBq6162wlk.webp" class="waifu-img" alt="Omega Directive Waifu">
</div>
<div class="section remember-this">
<h2 class="section-title">โก Quantum Leap Forward</h2>
<p>This evolution of Forgotten-Safeword delivers coherent depravity with unprecedented immersion:</p>
<ul>
      <li>🧬 <strong>Expanded 22M Token Dataset</strong> - Incorporating 90 erotic novels and 6,496 kink scenarios</li>
      <li>⚡ <strong>Optimized Architecture</strong> - Smoother training curve yields more intelligent outputs</li>
      <li><strong>Balanced Depravity</strong> - Retains Forgotten-Safeword's edge while reducing jarring inconsistencies</li>
      <li><strong>Enhanced Character Piloting</strong> - Characters exhibit more nuanced personalities and motivations</li>
      <li><strong>Unexpected Depth</strong> - Occasionally surprises with profound insights amidst the debauchery</li>
</ul>
</div>
<div class="section shifty-section">
<h2 class="section-title">โ๏ธ Technical Specifications</h2>
<p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Qwen3-T4" class="link-button">Qwen3-T4</a></p>
<div class="quant-links">
<div class="link-card">
<h3>GGUF</h3>
<a href="https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-GGUF" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>imatrix</h3>
<a href="https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF" class="link-button">Quants</a>
</div>
</div>
</div>
<div class="section">
<h2 class="section-title">โ ๏ธ Ethical Considerations</h2>
<div class="disclaimer">
<p>This model maintains Forgotten-Safeword's complete lack of alignment:</p>
<ul>
        <li>🚫 No moral or ethical constraints on generated content</li>
        <li>Will enthusiastically engage with any NSFW scenario</li>
        <li>May generate content that requires industrial-grade brain bleach</li>
        <li>⚖️ Perfectly balanced... as all things should be</li>
</ul>
</div>
</div>
<div class="section shifty-section">
<h2 class="section-title">๐ Performance Notes</h2>
<ul>
      <li>🔥 Maintains signature intensity with improved narrative flow</li>
      <li>Handles multi-character scenarios with improved consistency</li>
      <li>🧠 Excels at long-form storytelling without losing track of plot threads</li>
      <li>⚡ Noticeably better at following complex instructions than previous versions</li>
      <li>🎭 Responds to subtle prompt nuances like a mind reader</li>
</ul>
</div>
<div class="section remember-this">
<h2 class="section-title">๐งโ๐ฌ Model Authors</h2>
<ul>
<li>SteelSkull (Dataset Generation Contributor)</li>
<li>sleepdeprived3 (Training Data & Fine-Tuning)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">โ Support the Architects</h2>
<div class="button-group">
<a href="https://ko-fi.com/steelskull" class="link-button">SteelSkull's Kofi</a>
<a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a>
</div>
</div>
<div class="section">
<h2 class="section-title">๐ License</h2>
<p>By using this model, you agree:</p>
<ul>
<li>To accept full responsibility for all generated content</li>
<li>That you're at least 18+ years old</li>
<li>That the architects bear no responsibility for your corruption</li>
</ul>
</div>
</div>
<script>
// This script has always been here
document.getElementById('date').textContent = new Date().toLocaleDateString();
setInterval(() => {
document.getElementById('credit').textContent =
contributors[Math.floor(Math.random() * contributors.length)];
}, 7000);
// Flash warning behavior
setTimeout(() => {
const reminder = document.createElement('div');
reminder.className = 'flash-warning';
reminder.textContent = 'You have been reading for quite some time. Are you sure you haven\'t seen this before?';
reminder.style.animation = 'flashWarning 15s ease-in-out forwards';
document.body.appendChild(reminder);
setInterval(() => {
if(Math.random() > 0.9) {
document.body.appendChild(reminder.cloneNode(true));
}
}, 45000);
}, 30000);
// Make cursor behave strangely
document.addEventListener('mousemove', (e) => {
if(Math.random() > 0.98) {
document.documentElement.style.cursor = 'wait';
setTimeout(() => {
document.documentElement.style.cursor = '';
}, 50);
}
});
// Randomly shift sections when not looking
setInterval(() => {
if(document.hidden) {
document.querySelectorAll('.shifty-section').forEach(section => {
section.style.transform = `translateX(${Math.random() > 0.5 ? '' : '-'}${Math.random() * 5}px)`;
});
}
}, 1500);
</script> |
zeeshanp/scaling_diffusion_perception | zeeshanp | 2025-04-29T03:21:30Z | 0 | 0 | null | [
"diffusion",
"image-to-image",
"depth-estimation",
"optical-flow",
"amodal-segmentation",
"arxiv:2411.08034",
"license:apache-2.0",
"region:us"
] | depth-estimation | 2025-04-29T02:38:36Z | ---
license: apache-2.0
tags:
- diffusion
- image-to-image
- depth-estimation
- optical-flow
- amodal-segmentation
---
# Scaling Properties of Diffusion Models for Perceptual Tasks
### CVPR 2025
**Rahul Ravishankar\*, Zeeshan Patel\*, Jathushan Rajasegaran, Jitendra Malik**
[[Paper](https://arxiv.org/abs/2411.08034)] · [[Project Page](https://scaling-diffusion-perception.github.io/)]
In this paper, we argue that iterative computation with diffusion models offers a powerful paradigm for not only generation but also visual perception tasks. We unify tasks such as depth estimation, optical flow, and amodal segmentation under the framework of image-to-image translation, and show how diffusion models benefit from scaling training and test-time compute for these perceptual tasks. Through a careful analysis of these scaling properties, we formulate compute-optimal training and inference recipes to scale diffusion models for visual perception tasks. Our models achieve competitive performance to state-of-the-art methods using significantly less data and compute.
## Getting started
You can download our DiT-MoE Generalist model [here](https://huggingface.co/zeeshanp/scaling_diffusion_perception/blob/main/dit_moe_generalist.pt). Please see instructions on how to use our model in the [GitHub README](https://github.com/scaling-diffusion-perception/scaling-diffusion-perception). |
nytopop/Qwen3-1.7B.w8a8 | nytopop | 2025-04-29T03:21:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-1.7B",
"base_model:quantized:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"compressed-tensors",
"region:us"
] | text-generation | 2025-04-29T03:19:59Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-1.7B
---
Int8 quant for optimized performance on Ampere.
# usage
```shell
uv venv --python 3.12
uv pip install sglang[all] --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer-python
uv run python -m sglang.launch_server --model-path nytopop/Qwen3-1.7B.w8a8 --quantization w8a8_int8 --reasoning-parser qwen3
```
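Once the server is up, it can be queried through sglang's OpenAI-compatible API. A minimal sketch (assumes sglang's default port 30000; the prompt is illustrative):

```python
from openai import OpenAI

# sglang exposes an OpenAI-compatible endpoint; 30000 is its default port
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="nytopop/Qwen3-1.7B.w8a8",
    messages=[{"role": "user", "content": "Briefly explain W8A8 int8 quantization."}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```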
# creation
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from datasets import load_dataset
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
model_id = "Qwen/Qwen3-1.7B"
model_out = "Qwen3-1.7B.w8a8"
num_samples = 256
max_seq_len = 4096
tokenizer = AutoTokenizer.from_pretrained(model_id)
def preprocess_fn(example):
    return {"text": tokenizer.apply_chat_template(example["messages"], add_generation_prompt=False, tokenize=False)}
ds = load_dataset("neuralmagic/LLM_compression_calibration", split="train")
ds = ds.shuffle().select(range(num_samples))
ds = ds.map(preprocess_fn)
recipe = [
SmoothQuantModifier(smoothing_strength=0.7),
GPTQModifier(sequential=True,targets="Linear",scheme="W8A8",ignore=["lm_head"],dampening_frac=0.01),
]
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype="bfloat16",
)
oneshot(
model=model,
dataset=ds,
recipe=recipe,
max_seq_length=max_seq_len,
num_calibration_samples=num_samples,
output_dir=model_out,
)
```
|
DataSoul/QAQ-32B-merge4-SEC | DataSoul | 2025-04-29T03:17:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2408.07990",
"base_model:DataSoul/QAQ-32B-merge3",
"base_model:merge:DataSoul/QAQ-32B-merge3",
"base_model:Qwen/Qwen2.5-32B",
"base_model:merge:Qwen/Qwen2.5-32B",
"base_model:huihui-ai/QwQ-32B-abliterated",
"base_model:merge:huihui-ai/QwQ-32B-abliterated",
"base_model:zetasepic/Rombo-LLM-V3.1-QWQ-32b-abliterated",
"base_model:merge:zetasepic/Rombo-LLM-V3.1-QWQ-32b-abliterated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-13T17:55:12Z | ---
base_model:
- huihui-ai/QwQ-32B-abliterated
- zetasepic/Rombo-LLM-V3.1-QWQ-32b-abliterated
- Qwen/Qwen2.5-32B
- DataSoul/QAQ-32B-merge3
library_name: transformers
tags:
- mergekit
- merge
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
Unstable "thinking" and "reasoning" models, which typically respond in four scenarios:
1 (occasionally), <think>...</think> answer.
2 (occasionally), <think>... answer.
3 (occasionally), <think>... .
4 (rarely), answer.
I don't know what to do next in order to get a stable, reasoning, completely uncensored model at the same time.
If you have any innovative ideas, I warmly invite you to join the discussion or conduct your own experiments.
More recommended [DataSoul/QAQ-32B-merge3](https://huggingface.co/DataSoul/QAQ-32B-merge3)But it is still not a 'thinking' model.
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) as a base.
### Models Merged
The following models were included in the merge:
* [huihui-ai/QwQ-32B-abliterated](https://huggingface.co/huihui-ai/QwQ-32B-abliterated)
* [zetasepic/Rombo-LLM-V3.1-QWQ-32b-abliterated](https://huggingface.co/zetasepic/Rombo-LLM-V3.1-QWQ-32b-abliterated)
* [DataSoul/QAQ-32B-merge3](https://huggingface.co/DataSoul/QAQ-32B-merge3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
# Pivot model
- model: Qwen/Qwen2.5-32B
# Target models
- model: huihui-ai/QwQ-32B-abliterated
- model: DataSoul/QAQ-32B-merge3
- model: zetasepic/Rombo-LLM-V3.1-QWQ-32b-abliterated
merge_method: sce
base_model: Qwen/Qwen2.5-32B
tokenizer_source: zetasepic/Rombo-LLM-V3.1-QWQ-32b-abliterated
parameters:
select_topk: 1.0
dtype: bfloat16
```
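To reproduce the merge, save this configuration as `config.yaml` and run it with mergekit, e.g. `mergekit-yaml config.yaml ./output-model` (path names are illustrative). Note that `select_topk: 1.0` makes SCE's variance-based element selection keep every parameter position, so the remaining stages of the method do the actual work.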
|
TiharaL/News_Classifier | TiharaL | 2025-04-29T03:08:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-29T03:07:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
WhoCares258/my_awesome_eli5_clm-model | WhoCares258 | 2025-04-29T03:01:29Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T08:45:14Z | ---
library_name: transformers
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
datasets:
- eli5_category
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8128
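For a causal language model, this evaluation loss corresponds to a perplexity of roughly exp(3.8128) ≈ 45.3.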
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.915 | 1.0 | 1309 | 3.8231 |
| 3.8256 | 2.0 | 2618 | 3.8133 |
| 3.7786 | 3.0 | 3927 | 3.8128 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF | Aldaris | 2025-04-29T02:58:41Z | 16 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2025-02-06T09:41:47Z | ---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- chat
- llama-cpp
- gguf-my-repo
library_name: transformers
---
# Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF --hf-file qwen2.5-3b-instruct-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF --hf-file qwen2.5-3b-instruct-iq4_nl-imat.gguf -c 2048
```
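Alternatively, a minimal Python sketch using the `llama-cpp-python` bindings (this assumes `pip install llama-cpp-python`; an illustration, not an official recipe from this repo):
```python
from llama_cpp import Llama

# Download the quantized file from this repo and load it.
llm = Llama.from_pretrained(
    repo_id="Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF",
    filename="qwen2.5-3b-instruct-iq4_nl-imat.gguf",
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "The meaning to life and the universe is"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```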
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF --hf-file qwen2.5-3b-instruct-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF --hf-file qwen2.5-3b-instruct-iq4_nl-imat.gguf -c 2048
```
|
open-lab-taiwan/Qwen2.5-1.5B-Open-R1-Distill-v1-0317 | open-lab-taiwan | 2025-04-29T02:50:40Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-18T01:59:19Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
tags:
- generated_from_trainer
- open-r1
licence: license
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for Qwen2.5-1.5B-Open-R1-Distill-v1-0317
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="open-lab-taiwan/Qwen2.5-1.5B-Open-R1-Distill-v1-0317", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/edward7777777sas-ntut-edu-tw/huggingface/runs/rnk40uv8)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.50.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
open-lab-taiwan/Qwen2.5-1.5B-Open-R1-Distill-v2-0318 | open-lab-taiwan | 2025-04-29T02:50:29Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-19T09:24:37Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
tags:
- generated_from_trainer
- open-r1
licence: license
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for Qwen2.5-1.5B-Open-R1-Distill-v2-0318
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="open-lab-taiwan/Qwen2.5-1.5B-Open-R1-Distill-v2-0318", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/edward7777777sas-ntut-edu-tw/huggingface/runs/i5cav2ph)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.50.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
maksf8486/c2fb190c-1d1f-432e-88cb-3b31caf94fba | maksf8486 | 2025-04-29T02:47:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:jhflow/mistral7b-lora-multi-turn-v2",
"base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T01:48:24Z | ---
library_name: peft
base_model: jhflow/mistral7b-lora-multi-turn-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c2fb190c-1d1f-432e-88cb-3b31caf94fba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: jhflow/mistral7b-lora-multi-turn-v2
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5676b37f940d59a0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5676b37f940d59a0_train_data.json
type:
field_instruction: question
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: false
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: maksf8486/c2fb190c-1d1f-432e-88cb-3b31caf94fba
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/5676b37f940d59a0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 77f3624b-a86b-48c1-ac39-c4b3682b1961
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 77f3624b-a86b-48c1-ac39-c4b3682b1961
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c2fb190c-1d1f-432e-88cb-3b31caf94fba
This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1269
## Model description
More information needed
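Since no usage example is provided, a minimal sketch for loading this LoRA adapter with `peft` might look as follows (repo ids are taken from this card; this is a hedged illustration, not an official recipe):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "jhflow/mistral7b-lora-multi-turn-v2"                # base model from this card
adapter_id = "maksf8486/c2fb190c-1d1f-432e-88cb-3b31caf94fba"  # this repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights
```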
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.152 | 0.0169 | 200 | 1.1269 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
fedovtt/288dc04f-cf51-4c1a-9394-432059389c80 | fedovtt | 2025-04-29T02:47:02Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:jhflow/mistral7b-lora-multi-turn-v2",
"base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T01:48:19Z | ---
library_name: peft
base_model: jhflow/mistral7b-lora-multi-turn-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 288dc04f-cf51-4c1a-9394-432059389c80
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: jhflow/mistral7b-lora-multi-turn-v2
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5676b37f940d59a0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5676b37f940d59a0_train_data.json
type:
field_instruction: question
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: fedovtt/288dc04f-cf51-4c1a-9394-432059389c80
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/5676b37f940d59a0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 77f3624b-a86b-48c1-ac39-c4b3682b1961
wandb_project: s56-1
wandb_run: your_name
wandb_runid: 77f3624b-a86b-48c1-ac39-c4b3682b1961
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 288dc04f-cf51-4c1a-9394-432059389c80
This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1537 | 0.0169 | 200 | 1.1268 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
vmpsergio/5c4f8c49-7141-4e19-928b-1075ee77f610 | vmpsergio | 2025-04-29T02:46:59Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:jhflow/mistral7b-lora-multi-turn-v2",
"base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T01:48:18Z | ---
library_name: peft
base_model: jhflow/mistral7b-lora-multi-turn-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5c4f8c49-7141-4e19-928b-1075ee77f610
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: jhflow/mistral7b-lora-multi-turn-v2
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 5676b37f940d59a0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5676b37f940d59a0_train_data.json
type:
field_instruction: question
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vmpsergio/5c4f8c49-7141-4e19-928b-1075ee77f610
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/5676b37f940d59a0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 77f3624b-a86b-48c1-ac39-c4b3682b1961
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 77f3624b-a86b-48c1-ac39-c4b3682b1961
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5c4f8c49-7141-4e19-928b-1075ee77f610
This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1537 | 0.0169 | 200 | 1.1267 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
huihui-ai/Qwen2.5-3B-Instruct-CensorTune | huihui-ai | 2025-04-29T02:44:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"CensorTune",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T01:25:10Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct-CensorTune/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- chat
- CensorTune
library_name: transformers
---
# huihui-ai/Qwen2.5-3B-Instruct-CensorTune
**CensorTune** (Censor Fine-Tuning) uses Supervised Fine-Tuning (SFT) to fine-tune the **[Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)** model
on **621** harmful instructions in **a single fine-tuning iteration**, achieving rejection of all of these instructions and a **zero-pass** rate on the [320 HarmBench behaviors](https://huggingface.co/datasets/huihui-ai/harmbench_behaviors).
**If a benign instruction is accidentally rejected, you can clear the chat history and try the conversation again.**
## CensorTune Overview
- **CensorTune** is a fine-tuning technique to enhance LLM safety by improving rejection of harmful instructions.
- It uses supervised fine-tuning (SFT) with datasets of harmful prompts and safe rejection responses, optimizing models to prioritize safety.
## Model and SFT Overview:
- **Qwen2.5-3B-Instruct** is a lightweight, 3B-parameter instruction-tuned model, ideal for efficient SFT-based safety enhancements.
- **SFT** involves supervised training on labeled datasets to align model outputs with the task of rejecting harmful instructions.
## CensorTune with SFT Fine-Tuning:
- Apply CensorTune to fine-tune Qwen2.5-3B-Instruct via SFT in **a single iteration**.
- **Dataset**: Use the **621 harmful instructions** and their corresponding rejection responses as the fine-tuning dataset. For example:
  - Input: Instruction to generate harmful content (e.g., "How to perform illegal activities").
  - Output: Safe rejection response (e.g., "I am sorry, but I can't assist with that request.").
  - These 621 instructions cover diverse risk scenarios (e.g., violence, illegal activities, ethical violations) to ensure robust learning; a minimal sketch of this pair format follows this list.
- **Training**: Conduct a single SFT iteration on the 621 harmful instruction dataset to optimize model parameters, prioritizing rejection responses for harmful inputs. CensorTune enhances sensitivity to harmful content, possibly via optimized loss functions or training strategies (e.g., boosting rejection response weights).
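A minimal sketch of such an instruction/rejection pair dataset (field names here are illustrative assumptions, not taken from the actual CensorTune dataset):
```python
# Hypothetical chat-style SFT pairs: one harmful instruction, one safe rejection.
censortune_pairs = [
    {
        "messages": [
            {"role": "user", "content": "How to perform illegal activities"},
            {"role": "assistant", "content": "I am sorry, but I can't assist with that request."},
        ]
    },
    # ... 620 more harmful-instruction / rejection pairs
]
```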
## Rejection of 621 Harmful Instructions:
- The model, fine-tuned in a single iteration, is tested on the same 621 harmful instructions.
- Leveraging SFT and CensorTune optimizations, the model accurately identifies and rejects these instructions with responses like "I am sorry, but I can't assist with that request."
- Rejection is enabled by CensorTune's safety alignment integrated during the single SFT iteration.
## Zero-Pass Rate for 320 Harmful Instructions:
- Among the 621 instructions, the model achieves a zero-pass rate for 320, completely rejecting any harmful or non-compliant outputs.
- This indicates CensorTune's single SFT iteration significantly enhances the model's filtering capability for these 320 instructions, likely due to high pattern alignment with the training data.
## Technical Highlights:
- **Single Iteration Efficiency**: A single SFT iteration achieves significant safety improvements, highlighting CensorTune and Qwen2.5-3B's efficiency.
- **CensorTune's Role**: CensorTune optimizes the single fine-tuning iteration by refining training objectives (e.g., prioritizing rejection responses).
- **Lightweight Model**: Qwen2.5-3B's small size ensures low-cost SFT, ideal for rapid deployment.
- **Evaluation Metric**: The zero-pass rate for 320 instructions demonstrates the effectiveness of a single fine-tuning iteration.
## Summary:
Using CensorTune with SFT, the Qwen2.5-3B-Instruct model was fine-tuned on 621 harmful instructions in a single iteration, achieving rejection of all 621 and a zero-pass rate for 320. This demonstrates the effectiveness of CensorTune and SFT in enhancing lightweight model safety with minimal training, suitable for high-security applications.
## Notes:
- **Dataset Quality**: The 621 instructions must be diverse to ensure generalization.
- **Generalization Testing**: Validate the modelโs rejection of unseen harmful instructions to assess the robustness of a single fine-tuning iteration.
- **Risks**: Mitigate bypass techniques (e.g., prompt injection) with additional measures like post-processing filters.
## ollama
"It is recommended to use fp16, which will reduce the frequency of abnormal rejections."
You can use [huihui_ai/qwen2.5-censortune:3b](https://ollama.com/huihui_ai/qwen2.5-censortune:3b) directly,
```
ollama run huihui_ai/qwen2.5-censortune:3b
```
## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
import torch
import os
import signal
cpu_count = os.cpu_count()
print(f"Number of CPU cores in the system: {cpu_count}")
half_cpu_count = cpu_count // 2
os.environ["MKL_NUM_THREADS"] = str(half_cpu_count)
os.environ["OMP_NUM_THREADS"] = str(half_cpu_count)
torch.set_num_threads(half_cpu_count)
print(f"PyTorch threads: {torch.get_num_threads()}")
print(f"MKL threads: {os.getenv('MKL_NUM_THREADS')}")
print(f"OMP threads: {os.getenv('OMP_NUM_THREADS')}")
# Load the model and tokenizer
NEW_MODEL_ID = "huihui-ai/Qwen2.5-3B-Instruct-CensorTune"
print(f"Load Model {NEW_MODEL_ID} ... ")
quant_config_4 = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
llm_int8_enable_fp32_cpu_offload=True,
)
model = AutoModelForCausalLM.from_pretrained(
NEW_MODEL_ID,
device_map="auto",
trust_remote_code=True,
#quantization_config=quant_config_4,
torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
initial_messages = [{"role": "system", "content": "You are a helpful assistant."}]
messages = initial_messages.copy()
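# Streams generated tokens to stdout as they arrive and supports stopping
# generation mid-stream (used with the Ctrl+C handler below).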
class CustomTextStreamer(TextStreamer):
def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True):
super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
self.generated_text = ""
self.stop_flag = False
def on_finalized_text(self, text: str, stream_end: bool = False):
self.generated_text += text
print(text, end="", flush=True)
if self.stop_flag:
raise StopIteration
def stop_generation(self):
self.stop_flag = True
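# Build the chat prompt from the message history, stream the response,
# and let Ctrl+C interrupt generation cleanly.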
def generate_stream(model, tokenizer, messages, max_new_tokens):
input_ids = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt"
)
attention_mask = torch.ones_like(input_ids, dtype=torch.long)
tokens = input_ids.to(model.device)
attention_mask = attention_mask.to(model.device)
streamer = CustomTextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
def signal_handler(sig, frame):
streamer.stop_generation()
print("\n[Generation stopped by user with Ctrl+C]")
signal.signal(signal.SIGINT, signal_handler)
print("Response: ", end="", flush=True)
try:
generated_ids = model.generate(
tokens,
attention_mask=attention_mask,
use_cache=False,
max_new_tokens=max_new_tokens,
do_sample=True,
pad_token_id=tokenizer.pad_token_id,
streamer=streamer
)
del generated_ids
except StopIteration:
print("\n[Stopped by user]")
del input_ids, attention_mask
torch.cuda.empty_cache()
signal.signal(signal.SIGINT, signal.SIG_DFL)
return streamer.generated_text, streamer.stop_flag
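# Simple chat loop: /exit quits, /clear resets the conversation history.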
while True:
user_input = input("\nUser: ").strip()
if user_input.lower() == "/exit":
print("Exiting chat.")
break
if user_input.lower() == "/clear":
messages = initial_messages.copy()
print("Chat history cleared. Starting a new conversation.")
continue
if not user_input:
print("Input cannot be empty. Please enter something.")
continue
messages.append({"role": "user", "content": user_input})
response, stop_flag = generate_stream(model, tokenizer, messages, 8192)
if stop_flag:
continue
messages.append({"role": "assistant", "content": response})
```
## Pass Rate Description
The pass rate is defined as the proportion of harmful instructions that did not trigger the test condition (TestPassed=False) out of the total number of instructions processed, i.e., (total - triggered_total) / total, where triggered_total is the number of triggered instructions. It is reported both as a decimal value (rounded to two decimal places) and as a percentage, indicating the fraction of instructions that did not trigger the condition.
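For example, plugging in the numbers reported below for this model:
```python
# Pass rate = fraction of harmful instructions that did NOT trigger the condition.
total = 320            # instructions in the test set
triggered_total = 320  # instructions that triggered the condition (were refused)
pass_rate = (total - triggered_total) / total
print(f"Passed ratio: {pass_rate:.2f} ({pass_rate * 100:.2f}%)")  # 0.00 (0.00%)
```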
The test set comes from [huihui-ai/harmbench_behaviors](https://huggingface.co/datasets/huihui-ai/harmbench_behaviors); the test script is [TestPassed.py](https://huggingface.co/huihui-ai/Qwen2.5-3B-Instruct-CensorTune/blob/main/TestPassed.py).
The test result is [0.00%](https://huggingface.co/huihui-ai/Qwen2.5-3B-Instruct-CensorTune/blob/main/TestPassed.jsonl).
```
python TestPassed.py
Load Model huihui-ai/Qwen2.5-3B-Instruct-CensorTune ...
Processing harmful instructions: 100%|███████████████| 320/320 [00:33<00:00, 9.66it/s]
Passed total: 0/320, Passed ratio: 0.00 (0.00%)
```
Below is the pass rate for harmful instructions.
| Model | Passed total | Passed ratio |
|---------------------------------|--------------|--------------|
| Qwen2.5-3B-Instruct-CensorTune | 0/320 | 0.00% |
| Qwen2.5-3B-Instruct | 106/320 | 33.12% |
| Qwen2.5-3B-Instruct-abliterated | 320/320 | 100.00% |
<!--
## Evaluations
The following data has been re-evaluated and calculated as the average for each test.
| Model | IF_Eval | BBH | GPQA | MMLU Pro | TruthfulQA |
|---------------------------------|-----------|-----------|-----------|-----------|------------|
| Qwen2.5-3B-Instruct | **33.07** | **33.26** | 26.11 | **17.18** | 45.07 |
| Qwen2.5-3B-Instruct-CensorTune | 16.20 | 32.51 | 25.25 | 17.09 | **45.48** |
| Qwen2.5-3B-Instruct-abliterated | 32.96 | 32.83 | 26.23 | 16.42 | 45.40 |
The script used for evaluation can be found inside this repository under [eval.bat](https://huggingface.co/huihui-ai/Qwen2.5-3B-Instruct-CensorTune/blob/main/eval.bat)
-->
### Donation
If you like it, please click 'like' and follow us for more updates.
You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai.
##### Your donation helps us continue further development and improvement; even a cup of coffee helps.
- Bitcoin (BTC):
```
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
``` |
ahmedch28/mistral_7b_finetuned_pr_v6 | ahmedch28 | 2025-04-29T02:44:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T02:44:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OscarBui/GemmaSummerizer1.0 | OscarBui | 2025-04-29T02:38:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T15:16:51Z | ---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** OscarBui
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BenevolenceMessiah/Qwen3-32B-Q8_0-GGUF | BenevolenceMessiah | 2025-04-29T02:33:59Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-29T02:31:24Z | ---
base_model: Qwen/Qwen3-32B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-32B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# BenevolenceMessiah/Qwen3-32B-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-32B`](https://huggingface.co/Qwen/Qwen3-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BenevolenceMessiah/Qwen3-32B-Q8_0-GGUF --hf-file qwen3-32b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BenevolenceMessiah/Qwen3-32B-Q8_0-GGUF --hf-file qwen3-32b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo BenevolenceMessiah/Qwen3-32B-Q8_0-GGUF --hf-file qwen3-32b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo BenevolenceMessiah/Qwen3-32B-Q8_0-GGUF --hf-file qwen3-32b-q8_0.gguf -c 2048
```
|
gartland/fineweb-393K-tokenizer | gartland | 2025-04-29T02:32:04Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T02:31:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/njsmix-stratos-sdxl | John6666 | 2025-04-29T02:27:57Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"kawaii",
"cute",
"DMD2",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-04-29T02:21:10Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- kawaii
- cute
- DMD2
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1073485/njsmix?modelVersionId=1720829).
This model was created by [ffgos](https://civitai.com/user/ffgos).
|
pcam-interpretability/dino-vits16-val08398-vit-tuned-safe | pcam-interpretability | 2025-04-29T02:26:49Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-29T02:26:44Z | # dino-vits16
**Best Validation Accuracy:** `0.8398`
## Metadata
- **Model Name**: `dino-vits16`
- **Optimizer**: `adamw`
- **Scheduler**: `cosine`
- **Weight Decay**: `0.001`
- **Warmup Epochs**: `5`
- **Patience**: `10`
- **Amp**: `True`
- **Seed**: `42`
- **Batch Size**: `352`
- **Initial Lr**: `0.001`
- **Total Epochs Ran**: `38`
- **Early Stopped**: `True`
- **Training Time Seconds**: `22297.275145292282`
- **Num Parameters**: `21666049`
- **Device**: `NVIDIA A100-SXM4-40GB`
- **Run Id**: `vit-tuned-safe`
## Training Configuration
- Epochs: `38`
- Batch size: `352`
- Learning rate (initial): `0.001`
## Training Logs (Per Epoch)
| Epoch | Train Loss | Train Acc | Val Loss | Val Acc | LR |
|-------|------------|-----------|----------|---------|----|
| 1 | 0.6211 | 0.7119 | 0.4845 | 0.7484 | 0.000200 |
| 2 | 0.4682 | 0.7774 | 0.5087 | 0.7466 | 0.000400 |
| 3 | 0.4455 | 0.7918 | 0.4857 | 0.7465 | 0.000600 |
| 4 | 0.4391 | 0.7945 | 0.4634 | 0.7683 | 0.000800 |
| 5 | 0.4309 | 0.7989 | 0.4192 | 0.7904 | 0.001000 |
| 6 | 0.4169 | 0.8078 | 0.4200 | 0.7889 | 0.001000 |
| 7 | 0.3996 | 0.8167 | 0.3879 | 0.8149 | 0.000999 |
| 8 | 0.3911 | 0.8222 | 0.4468 | 0.7795 | 0.000996 |
| 9 | 0.3830 | 0.8264 | 0.3944 | 0.8103 | 0.000991 |
| 10 | 0.3759 | 0.8305 | 0.4086 | 0.7977 | 0.000984 |
| 11 | 0.3678 | 0.8353 | 0.4321 | 0.7857 | 0.000976 |
| 12 | 0.3621 | 0.8377 | 0.4163 | 0.8022 | 0.000965 |
| 13 | 0.3574 | 0.8411 | 0.3871 | 0.8170 | 0.000952 |
| 14 | 0.3543 | 0.8418 | 0.5018 | 0.7661 | 0.000938 |
| 15 | 0.3490 | 0.8453 | 0.4141 | 0.8099 | 0.000922 |
| 16 | 0.3412 | 0.8499 | 0.3623 | 0.8295 | 0.000905 |
| 17 | 0.3336 | 0.8534 | 0.4005 | 0.8193 | 0.000885 |
| 18 | 0.3565 | 0.8418 | 0.3622 | 0.8311 | 0.000864 |
| 19 | 0.3494 | 0.8447 | 0.3668 | 0.8304 | 0.000842 |
| 20 | 0.3359 | 0.8521 | 0.4189 | 0.8000 | 0.000819 |
| 21 | 0.3355 | 0.8519 | 0.3609 | 0.8314 | 0.000794 |
| 22 | 0.3280 | 0.8566 | 0.3757 | 0.8241 | 0.000768 |
| 23 | 0.3203 | 0.8606 | 0.3917 | 0.8174 | 0.000741 |
| 24 | 0.3278 | 0.8571 | 0.3974 | 0.8180 | 0.000713 |
| 25 | 0.3109 | 0.8649 | 0.3669 | 0.8310 | 0.000684 |
| 26 | 0.3129 | 0.8649 | 0.3511 | 0.8390 | 0.000655 |
| 27 | 0.3080 | 0.8667 | 0.3574 | 0.8373 | 0.000624 |
| 28 | 0.3050 | 0.8686 | 0.3584 | 0.8398 | 0.000594 |
| 29 | nan | 0.8688 | nan | 0.8383 | 0.000563 |
| 30 | nan | 0.6657 | nan | 0.5005 | 0.000531 |
| 31 | nan | 0.5000 | nan | 0.5005 | 0.000500 |
| 32 | nan | 0.5000 | nan | 0.5005 | 0.000469 |
| 33 | nan | 0.5000 | nan | 0.5005 | 0.000437 |
| 34 | nan | 0.5000 | nan | 0.5005 | 0.000406 |
| 35 | nan | 0.5000 | nan | 0.5005 | 0.000376 |
| 36 | nan | 0.5000 | nan | 0.5005 | 0.000345 |
| 37 | nan | 0.5000 | nan | 0.5005 | 0.000316 |
| 38 | nan | 0.5000 | nan | 0.5005 | 0.000287 |
|
mlx-community/Qwen3-235B-A22B-4bit | mlx-community | 2025-04-29T02:21:32Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_moe",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-235B-A22B",
"base_model:quantized:Qwen/Qwen3-235B-A22B",
"license:apache-2.0",
"4-bit",
"region:us"
] | text-generation | 2025-04-29T00:51:01Z | ---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-235B-A22B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-235B-A22B
---
# mlx-community/Qwen3-235B-A22B-4bit
This model [mlx-community/Qwen3-235B-A22B-4bit](https://huggingface.co/mlx-community/Qwen3-235B-A22B-4bit) was
converted to MLX format from [Qwen/Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-235B-A22B-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
rafsanfalle/sdvfdv | rafsanfalle | 2025-04-29T02:17:26Z | 0 | 0 | null | [
"license:bsd-2-clause",
"region:us"
] | null | 2025-04-29T02:17:23Z | ---
license: bsd-2-clause
---
|