modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-27 18:27:39) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 500 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-27 18:23:41) | card (string, length 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---|
Membersuger/Euro_19 | Membersuger | 2025-05-03T16:29:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T16:19:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
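Since the card leaves this blank, a minimal sketch, assuming the repo loads as a standard Llama-style causal LM (its "llama" and "text-generation" tags suggest so; the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: standard safetensors weights for a Llama-architecture causal LM.
model_id = "Membersuger/Euro_19"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```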
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jnjj/Vvv | jnjj | 2025-05-03T16:26:47Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:jnjj/model_no_bias_qwen3-0.6B",
"base_model:quantized:jnjj/model_no_bias_qwen3-0.6B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T16:26:42Z | ---
base_model: jnjj/model_no_bias_qwen3-0.6B
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF
This model was converted to GGUF format from [`jnjj/model_no_bias_qwen3-0.6B`](https://huggingface.co/jnjj/model_no_bias_qwen3-0.6B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jnjj/model_no_bias_qwen3-0.6B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF --hf-file model_no_bias_qwen3-0.6b-q3_k_l.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF --hf-file model_no_bias_qwen3-0.6b-q3_k_l.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF --hf-file model_no_bias_qwen3-0.6b-q3_k_l.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF --hf-file model_no_bias_qwen3-0.6b-q3_k_l.gguf -c 2048
```
|
Eehan/pythia-1b-deduped-tldr-gpm-2dim-temp-6025-beta-0.04 | Eehan | 2025-05-03T16:19:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T16:17:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ai-and-society/llama-3.1-8B-Instruct-SQINT8 | ai-and-society | 2025-05-03T16:19:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"compressed-tensors",
"region:us"
] | text-generation | 2025-05-03T16:16:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
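The tags on this entry ("8-bit", "compressed-tensors") indicate a quantized checkpoint; a minimal serving sketch, assuming its compressed-tensors format is supported by the installed vLLM version (which loads such checkpoints natively):
```python
from vllm import LLM, SamplingParams

# Assumption: the checkpoint's compressed-tensors INT8 quantization is supported
# by the installed vLLM version.
llm = LLM(model="ai-and-society/llama-3.1-8B-Instruct-SQINT8")
params = SamplingParams(max_tokens=64)
print(llm.generate(["Hello, world!"], params)[0].outputs[0].text)
```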
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hachipo/OpenCoder-8B-Base-PIFT-jaen_1000_2 | Hachipo | 2025-05-03T16:19:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T16:15:11Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Eehan/pythia-1b-deduped-tldr-gpm-2dim-temp-0.67-beta-0.04 | Eehan | 2025-05-03T16:16:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T16:14:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
macpaw-research/tst_16bit-mlx | macpaw-research | 2025-05-03T16:07:06Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:macpaw-research/tst_16bit",
"base_model:finetune:macpaw-research/tst_16bit",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-03T16:03:41Z | ---
license: apache-2.0
base_model: macpaw-research/tst_16bit
tags:
- mlx
library_name: mlx
pipeline_tag: text-generation
---
# macpaw-research/tst_16bit-mlx
This model [macpaw-research/tst_16bit-mlx](https://huggingface.co/macpaw-research/tst_16bit-mlx) was
converted to MLX format from [macpaw-research/tst_16bit](https://huggingface.co/macpaw-research/tst_16bit)
using mlx-lm version **0.23.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("macpaw-research/tst_16bit-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
hardik9719/videomae-base-finetuned-ucf-timesfomer-5-5-25-610videos | hardik9719 | 2025-05-03T16:04:39Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"timesformer",
"video-classification",
"generated_from_trainer",
"base_model:facebook/timesformer-base-finetuned-k400",
"base_model:finetune:facebook/timesformer-base-finetuned-k400",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-05-03T05:26:46Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/timesformer-base-finetuned-k400
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf-timesfomer-5-5-25-610videos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf-timesfomer-5-5-25-610videos
This model is a fine-tuned version of [facebook/timesformer-base-finetuned-k400](https://huggingface.co/facebook/timesformer-base-finetuned-k400) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1723
- Accuracy: 0.5749
## Model description
More information needed
## Intended uses & limitations
More information needed
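Pending that information, a minimal inference sketch, assuming the checkpoint works with the stock transformers video-classification pipeline (the clip path below is hypothetical):
```python
from transformers import pipeline

# Assumption: standard TimeSformer video-classification setup, as in the base model.
# Requires a video backend such as av installed alongside transformers.
classifier = pipeline(
    "video-classification",
    model="hardik9719/videomae-base-finetuned-ucf-timesfomer-5-5-25-610videos",
)
print(classifier("sample_clip.mp4"))  # hypothetical local clip path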
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 276
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.581 | 0.3370 | 93 | 1.6027 | 0.4022 |
| 1.4297 | 1.3370 | 186 | 1.2423 | 0.5869 |
| 0.8781 | 2.3261 | 276 | 1.0586 | 0.6129 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu118
- Datasets 3.3.2
- Tokenizers 0.21.1
|
Anchor-Chitra-Tripathi-Viral-Video/18-video.Anchor-Chitra-Tripathi.viral.video.original.here | Anchor-Chitra-Tripathi-Viral-Video | 2025-05-03T16:03:15Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-03T15:59:42Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?Anchor-Chitra-Tripathi)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Anchor-Chitra-Tripathi)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Anchor-Chitra-Tripathi) |
ASethi04/meta-llama-Llama-3.1-8B-pubmedqa-third-lora-4-0.0001-same-prompt-template | ASethi04 | 2025-05-03T16:00:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T14:19:39Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-pubmedqa-third-lora-4-0.0001-same-prompt-template
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.1-8B-pubmedqa-third-lora-4-0.0001-same-prompt-template
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-pubmedqa-third-lora-4-0.0001-same-prompt-template", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/gp8nvulh)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Romain-XV/8f3edba3-d660-4f05-bc84-9befdb0b2deb | Romain-XV | 2025-05-03T15:59:04Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"unsloth",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T15:09:42Z | ---
base_model: unsloth/Phi-3-mini-4k-instruct
library_name: transformers
model_name: 8f3edba3-d660-4f05-bc84-9befdb0b2deb
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
- unsloth
licence: license
---
# Model Card for 8f3edba3-d660-4f05-bc84-9befdb0b2deb
This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Romain-XV/8f3edba3-d660-4f05-bc84-9befdb0b2deb", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/romain_fnc-xventures/Gradients-On-Demand/runs/8bqxlpyz)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bihungba1101/json_segmenting_sft_warmup_qwen | bihungba1101 | 2025-05-03T15:57:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T15:57:03Z | ---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bihungba1101
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B
This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
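A minimal loading sketch, assuming the repo hosts merged full weights rather than LoRA adapters (if it contains adapters only, load them with peft instead):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: merged Qwen3 weights loadable as a plain causal LM.
model_id = "bihungba1101/json_segmenting_sft_warmup_qwen"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Segment the following JSON:", return_tensors="pt").to(model.device)  # illustrative prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```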
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TOMFORD79/Fly43 | TOMFORD79 | 2025-05-03T15:56:22Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-03T15:15:28Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TOMFORD79/Fly41 | TOMFORD79 | 2025-05-03T15:56:05Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-03T15:15:16Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Triangle104/QWQ-32B-Dawnwhisper-Q3_K_M-GGUF | Triangle104 | 2025-05-03T15:54:40Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:DoppelReflEx/QWQ-32B-Dawnwhisper",
"base_model:quantized:DoppelReflEx/QWQ-32B-Dawnwhisper",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T15:53:30Z | ---
base_model: DoppelReflEx/QWQ-32B-Dawnwhisper
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/QWQ-32B-Dawnwhisper-Q3_K_M-GGUF
This model was converted to GGUF format from [`DoppelReflEx/QWQ-32B-Dawnwhisper`](https://huggingface.co/DoppelReflEx/QWQ-32B-Dawnwhisper) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DoppelReflEx/QWQ-32B-Dawnwhisper) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q3_K_M-GGUF --hf-file qwq-32b-dawnwhisper-q3_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q3_K_M-GGUF --hf-file qwq-32b-dawnwhisper-q3_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q3_K_M-GGUF --hf-file qwq-32b-dawnwhisper-q3_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q3_K_M-GGUF --hf-file qwq-32b-dawnwhisper-q3_k_m.gguf -c 2048
```
|
Hachipo/OpenCoder-8B-Base-MIFT-en_1000_2 | Hachipo | 2025-05-03T15:54:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T15:50:20Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
recuse/mBERT-distiluse-base-multilingual-cased-v2-MLM | recuse | 2025-05-03T15:51:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-05-03T15:47:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
recuse/mBERT-distiluse-base-multilingual-cased-v2-MLM was continually pre-trained on a Korean Wikipedia dataset using the masked language modeling (MLM) objective.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
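Since this section is blank, a minimal fill-mask sketch, assuming the checkpoint keeps DistilBERT's standard `[MASK]` token (the Korean example sentence is illustrative):
```python
from transformers import pipeline

# Assumption: standard distilbert fill-mask head, matching this card's "fill-mask" pipeline tag.
unmasker = pipeline("fill-mask", model="recuse/mBERT-distiluse-base-multilingual-cased-v2-MLM")
for pred in unmasker("서울은 한국의 [MASK]이다."):  # "Seoul is the [MASK] of Korea."
    print(pred["token_str"], round(pred["score"], 4))
```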
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
1-Jobz-Hunting-Sajal-Malik-Viral-Video-18x/wATCH.TRENDING.VIDEO.Jobz.Hunting.Sajal.Malik.viral.video.Tutorial | 1-Jobz-Hunting-Sajal-Malik-Viral-Video-18x | 2025-05-03T15:51:12Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-03T15:49:17Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=Sajal-Malik)
[🔴 CLICK HERE 🌐==►► Download Now](https://videohere.top/?V=Sajal-Malik)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Sajal-Malik) |
DuongTrongChi/vinallama-2.7b-chat-sft-v1 | DuongTrongChi | 2025-05-03T15:47:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T15:44:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
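Absent that code, a minimal chat sketch, assuming the tokenizer ships a chat template (the entry's "conversational" tag suggests it does; the Vietnamese prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the tokenizer includes a chat template, per the repo's "conversational" tag.
model_id = "DuongTrongChi/vinallama-2.7b-chat-sft-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Xin chào!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```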
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
memeviss/zombieX_9 | memeviss | 2025-05-03T15:44:54Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-05-03T15:40:29Z | # Optimized TTS Model
This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques.
## Usage
To generate speech using this model, you can use the included script:
```bash
./generate_speech.py --text "Your text here" --output_path output.wav
```
For more details, see the optimization report in this directory.
|
memeviss/zombieX_6 | memeviss | 2025-05-03T15:43:39Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-05-03T15:40:27Z | # Optimized TTS Model
This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques.
## Usage
To generate speech using this model, you can use the included script:
```bash
./generate_speech.py --text "Your text here" --output_path output.wav
```
For more details, see the optimization report in this directory.
|
Triangle104/QWQ-32B-Dawnwhisper-Q3_K_S-GGUF | Triangle104 | 2025-05-03T15:43:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:DoppelReflEx/QWQ-32B-Dawnwhisper",
"base_model:quantized:DoppelReflEx/QWQ-32B-Dawnwhisper",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T15:42:08Z | ---
base_model: DoppelReflEx/QWQ-32B-Dawnwhisper
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/QWQ-32B-Dawnwhisper-Q3_K_S-GGUF
This model was converted to GGUF format from [`DoppelReflEx/QWQ-32B-Dawnwhisper`](https://huggingface.co/DoppelReflEx/QWQ-32B-Dawnwhisper) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DoppelReflEx/QWQ-32B-Dawnwhisper) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q3_K_S-GGUF --hf-file qwq-32b-dawnwhisper-q3_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q3_K_S-GGUF --hf-file qwq-32b-dawnwhisper-q3_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q3_K_S-GGUF --hf-file qwq-32b-dawnwhisper-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q3_K_S-GGUF --hf-file qwq-32b-dawnwhisper-q3_k_s.gguf -c 2048
```
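If you prefer Python, here is a minimal sketch using the `llama-cpp-python` bindings; the package, the `from_pretrained` download path, and the sampling settings are assumptions, untested against this quant:

```python
# pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

# Pull the same Q3_K_S file used in the CLI examples above straight from this repo.
llm = Llama.from_pretrained(
    repo_id="Triangle104/QWQ-32B-Dawnwhisper-Q3_K_S-GGUF",
    filename="qwq-32b-dawnwhisper-q3_k_s.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```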
|
franzexplorer77/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-alert_armored_trout | franzexplorer77 | 2025-05-03T15:43:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am alert armored trout",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T13:54:42Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-alert_armored_trout
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am alert armored trout
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-alert_armored_trout
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="franzexplorer77/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-alert_armored_trout", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
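For reference, a minimal GRPO sketch with TRL, following the TRL quickstart; the dataset and the toy reward below are illustrative assumptions, not the RL-swarm setup actually used for this model:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Illustrative prompt dataset; the actual RL-swarm data is not documented in this card.
dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward: prefer completions close to 20 characters (from the TRL quickstart).
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2.5-1.5B-GRPO-sketch"),
    train_dataset=dataset,
)
trainer.train()
```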
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf | RichardErkhov | 2025-05-03T15:42:44Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T13:22:19Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3_non_balanced_regular_sft_2e6_bz32_ep2 - GGUF
- Model creator: https://huggingface.co/selfcorrexp/
- Original model: https://huggingface.co/selfcorrexp/llama3_non_balanced_regular_sft_2e6_bz32_ep2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q2_K.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q2_K.gguf) | Q2_K | 2.96GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q3_K.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q3_K.gguf) | Q3_K | 3.74GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q4_0.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q4_0.gguf) | Q4_0 | 4.34GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q4_K.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q4_K.gguf) | Q4_K | 4.58GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q4_1.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q4_1.gguf) | Q4_1 | 4.78GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q5_0.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q5_0.gguf) | Q5_0 | 5.21GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q5_K.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q5_K.gguf) | Q5_K | 5.34GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q5_1.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q5_1.gguf) | Q5_1 | 5.65GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q6_K.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q6_K.gguf) | Q6_K | 6.14GB |
| [llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q8_0.gguf](https://huggingface.co/RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf/blob/main/llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q8_0.gguf) | Q8_0 | 7.95GB |
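To fetch a single quant from the table above programmatically (the filename is copied from the table; the `Q4_K_M` choice is arbitrary), something like:

```python
from huggingface_hub import hf_hub_download

# Downloads one GGUF file from this repo and returns its local path.
path = hf_hub_download(
    repo_id="RichardErkhov/selfcorrexp_-_llama3_non_balanced_regular_sft_2e6_bz32_ep2-gguf",
    filename="llama3_non_balanced_regular_sft_2e6_bz32_ep2.Q4_K_M.gguf",
)
print(path)
```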
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
memeviss/zombieX_3 | memeviss | 2025-05-03T15:42:33Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-05-03T15:40:26Z | # Optimized TTS Model
This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques.
## Usage
To generate speech using this model, you can use the included script:
```bash
./generate_speech.py --text "Your text here" --output_path output.wav
```
For more details, see the optimization report in this directory.
|
oferk/ppo-LunarLander-v2 | oferk | 2025-05-03T15:41:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-03T15:38:39Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.46 +/- 22.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below follows the usual SB3 Hub naming convention and is an assumption):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed from the usual SB3 Hub convention, not confirmed by this card.
checkpoint = load_from_hub("oferk/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ASethi04/meta-llama-Llama-3.1-8B-opc-sft-first-lora-4-0.0004 | ASethi04 | 2025-05-03T15:40:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T14:32:21Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-opc-sft-first-lora-4-0.0004
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.1-8B-opc-sft-first-lora-4-0.0004
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-opc-sft-first-lora-4-0.0004", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/8q8hv38o)
This model was trained with SFT.
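As a rough illustration, a minimal TRL SFT sketch; the dataset is a placeholder (the "opc-sft" data is not linked here), the 4e-4 learning rate is read off the model name, and the LoRA setup the name implies is omitted:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the actual "opc-sft" data for this run is not linked in the card.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B",
    train_dataset=dataset,
    # learning_rate read off the model name ("0.0004"); other settings are defaults.
    args=SFTConfig(output_dir="llama31-8b-sft-sketch", learning_rate=4e-4),
)
trainer.train()
```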
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Bonnief/finetune-afriberta-small-am | Bonnief | 2025-05-03T15:38:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:castorini/afriberta_small",
"base_model:finetune:castorini/afriberta_small",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-05-03T13:06:00Z | ---
library_name: transformers
base_model: castorini/afriberta_small
tags:
- generated_from_trainer
model-index:
- name: finetune-afriberta-small-am
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-afriberta-small-am
This model is a fine-tuned version of [castorini/afriberta_small](https://huggingface.co/castorini/afriberta_small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5525
## Model description
More information needed
## Intended uses & limitations
More information needed
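Since no usage snippet is given, here is a minimal fill-mask sketch (assuming the tokenizer follows XLM-R conventions with `<mask>` as the mask token, as the `xlm-roberta` tag suggests; the Amharic example sentence is purely illustrative):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Bonnief/finetune-afriberta-small-am")

# Illustrative Amharic sentence; replace with your own text containing <mask>.
for pred in fill_mask("አዲስ አበባ የኢትዮጵያ <mask> ናት።"):
    print(pred["token_str"], round(pred["score"], 3))
```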
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 999
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 50000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
mradermacher/gemma-2-9B-it-blend-GGUF | mradermacher | 2025-05-03T15:34:26Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:spacematt/gemma-2-9B-it-blend",
"base_model:quantized:spacematt/gemma-2-9B-it-blend",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T21:18:06Z | ---
base_model: spacematt/gemma-2-9B-it-blend
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/spacematt/gemma-2-9B-it-blend
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
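For the multi-part case those READMEs cover, a hedged sketch of the two common approaches (this repo's quants are single files, and the filenames below are hypothetical):

```bash
# Newer llama.cpp-style splits: merge with the gguf-split tool
# (tool and flag names per recent llama.cpp builds; check --help on yours).
llama-gguf-split --merge model.Q8_0-00001-of-00002.gguf model.Q8_0.gguf

# Older TheBloke-style splits were plain byte concatenations:
cat model.Q8_0.gguf-split-a model.Q8_0.gguf-split-b > model.Q8_0.gguf
```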
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF | mradermacher | 2025-05-03T15:34:24Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-3",
"base_model:quantized:Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T10:47:58Z | ---
base_model: Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF | Triangle104 | 2025-05-03T15:34:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"base_model:quantized:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-03T15:32:25Z | ---
base_model: nvidia/Llama-3.1-Nemotron-Nano-8B-v1
language:
- en
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
tags:
- nvidia
- llama-3
- pytorch
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF
This model was converted to GGUF format from [`nvidia/Llama-3.1-Nemotron-Nano-8B-v1`](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) for more details on the model.
---
Llama-3.1-Nemotron-Nano-8B-v1 is a large language model (LLM) derived from Meta Llama-3.1-8B-Instruct
(AKA the reference model). It is a reasoning model that is post-trained
for reasoning, human chat preferences, and tasks such as RAG and tool
calling.
Llama-3.1-Nemotron-Nano-8B-v1 is a model which offers a great
tradeoff between model accuracy and efficiency. It is created from Llama
3.1 8B Instruct and offers improvements in model accuracy. The model
fits on a single RTX GPU and can be used locally. The model supports a
context length of 128K.
This model underwent a multi-phase post-training process to enhance
both its reasoning and non-reasoning capabilities. This includes a
supervised fine-tuning stage for Math, Code, Reasoning, and Tool Calling
as well as multiple reinforcement learning (RL) stages using REINFORCE
(RLOO) and Online Reward-aware Preference Optimization (RPO) algorithms
for both chat and instruction-following. The final model checkpoint is
obtained after merging the final SFT and Online RPO checkpoints.
Improved using Qwen.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q8_0.gguf -c 2048
```
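The upstream NVIDIA card controls reasoning mode through the system prompt ("detailed thinking on" / "detailed thinking off"); with the server started as above, that would look roughly like this, using llama-server's OpenAI-compatible endpoint (untested with this quant):

```bash
# Assumes llama-server is running as above (default: http://localhost:8080).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "detailed thinking on"},
      {"role": "user", "content": "Solve x*(sin(x)+2)=0"}
    ]
  }'
```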
|
mafzaal/finetuned_arctic_ft | mafzaal | 2025-05-03T15:33:46Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:156",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-l",
"base_model:finetune:Snowflake/snowflake-arctic-embed-l",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-03T15:33:03Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: Which multi-modal models were released by significant vendors in
2024, and in which months did they appear?
sentences:
- 'An interesting point of comparison here could be the way railways rolled out
around the world in the 1800s. Constructing these required enormous investments
and had a massive environmental impact, and many of the lines that were built
turned out to be unnecessary—sometimes multiple lines from different companies
serving the exact same routes!
The resulting bubbles contributed to several financial crashes, see Wikipedia
for Panic of 1873, Panic of 1893, Panic of 1901 and the UK’s Railway Mania. They
left us with a lot of useful infrastructure and a great deal of bankruptcies and
environmental damage.
The year of slop'
- 'In 2024, almost every significant model vendor released multi-modal models. We
saw the Claude 3 series from Anthropic in March, Gemini 1.5 Pro in April (images,
audio and video), then September brought Qwen2-VL and Mistral’s Pixtral 12B and
Meta’s Llama 3.2 11B and 90B vision models. We got audio input and output from
OpenAI in October, then November saw SmolVLM from Hugging Face and December saw
image and video models from Amazon Nova.
In October I upgraded my LLM CLI tool to support multi-modal models via attachments.
It now has plugins for a whole collection of different vision models.'
- 'OpenAI made GPT-4o free for all users in May, and Claude 3.5 Sonnet was freely
available from its launch in June. This was a momentus change, because for the
previous year free users had mostly been restricted to GPT-3.5 level models, meaning
new users got a very inaccurate mental model of what a capable LLM could actually
do.
That era appears to have ended, likely permanently, with OpenAI’s launch of ChatGPT
Pro. This $200/month subscription service is the only way to access their most
capable model, o1 Pro.
Since the trick behind the o1 series (and the future models it will undoubtedly
inspire) is to expend more compute time to get better results, I don’t think those
days of free access to the best available models are likely to return.'
- source_sentence: How is a prompt without evals, models, and UX compared in the given
context?
sentences:
- 'The environmental impact got much, much worse
The much bigger problem here is the enormous competitive buildout of the infrastructure
that is imagined to be necessary for these models in the future.
Companies like Google, Meta, Microsoft and Amazon are all spending billions of
dollars rolling out new datacenters, with a very material impact on the electricity
grid and the environment. There’s even talk of spinning up new nuclear power stations,
but those can take decades.
Is this infrastructure necessary? DeepSeek v3’s $6m training cost and the continued
crash in LLM prices might hint that it’s not. But would you want to be the big
tech executive that argued NOT to build out this infrastructure only to be proven
wrong in a few years’ time?'
- 'When @v0 first came out we were paranoid about protecting the prompt with all
kinds of pre and post processing complexity.
We completely pivoted to let it rip. A prompt without the evals, models, and especially
UX is like getting a broken ASML machine without a manual'
- 'The boring yet crucial secret behind good system prompts is test-driven development.
You don’t write down a system prompt and find ways to test it. You write down
tests and find a system prompt that passes them.
It’s become abundantly clear over the course of 2024 that writing good automated
evals for LLM-powered systems is the skill that’s most needed to build useful
applications on top of these models. If you have a strong eval suite you can adopt
new models faster, iterate better and build more reliable and useful product features
than your competition.
Vercel’s Malte Ubl:'
- source_sentence: How did the construction of railways in the 1800s impact the environment?
sentences:
- 'DeepSeek v3 is a huge 685B parameter model—one of the largest openly licensed
models currently available, significantly bigger than the largest of Meta’s Llama
series, Llama 3.1 405B.
Benchmarks put it up there with Claude 3.5 Sonnet. Vibe benchmarks (aka the Chatbot
Arena) currently rank it 7th, just behind the Gemini 2.0 and OpenAI 4o/o1 models.
This is by far the highest ranking openly licensed model.
The really impressive thing about DeepSeek v3 is the training cost. The model
was trained on 2,788,000 H800 GPU hours at an estimated cost of $5,576,000. Llama
3.1 405B trained 30,840,000 GPU hours—11x that used by DeepSeek v3, for a model
that benchmarks slightly worse.'
- 'An interesting point of comparison here could be the way railways rolled out
around the world in the 1800s. Constructing these required enormous investments
and had a massive environmental impact, and many of the lines that were built
turned out to be unnecessary—sometimes multiple lines from different companies
serving the exact same routes!
The resulting bubbles contributed to several financial crashes, see Wikipedia
for Panic of 1873, Panic of 1893, Panic of 1901 and the UK’s Railway Mania. They
left us with a lot of useful infrastructure and a great deal of bankruptcies and
environmental damage.
The year of slop'
- 'So far, I think they’re a net positive. I’ve used them on a personal level to
improve my productivity (and entertain myself) in all sorts of different ways.
I think people who learn how to use them effectively can gain a significant boost
to their quality of life.
A lot of people are yet to be sold on their value! Some think their negatives
outweigh their positives, some think they are all hot air, and some even think
they represent an existential threat to humanity.
They’re actually quite easy to build
The most surprising thing we’ve learned about LLMs this year is that they’re actually
quite easy to build.'
- source_sentence: How many lines of Python code are generally needed to train a basic
version of a powerful system?
sentences:
- 'We already knew LLMs were spookily good at writing code. If you prompt them right,
it turns out they can build you a full interactive application using HTML, CSS
and JavaScript (and tools like React if you wire up some extra supporting build
mechanisms)—often in a single prompt.
Anthropic kicked this idea into high gear when they released Claude Artifacts,
a groundbreaking new feature that was initially slightly lost in the noise due
to being described half way through their announcement of the incredible Claude
3.5 Sonnet.
With Artifacts, Claude can write you an on-demand interactive application and
then let you use it directly inside the Claude interface.
Here’s my Extract URLs app, entirely generated by Claude:'
- 'I’m still trying to figure out the best patterns for doing this for my own work.
Everyone knows that evals are important, but there remains a lack of great guidance
for how to best implement them—I’m tracking this under my evals tag. My SVG pelican
riding a bicycle benchmark is a pale imitation of what a real eval suite should
look like.
Apple Intelligence is bad, Apple’s MLX library is excellent
As a Mac user I’ve been feeling a lot better about my choice of platform this
year.
Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU
was a huge disadvantage in terms of trying out new models.'
- 'Intuitively, one would expect that systems this powerful would take millions
of lines of complex code. Instead, it turns out a few hundred lines of Python
is genuinely enough to train a basic version!
What matters most is the training data. You need a lot of data to make these
things work, and the quantity and quality of the training data appears to be the
most important factor in how good the resulting model is.
If you can gather the right data, and afford to pay for the GPUs to train it,
you can build an LLM.'
- source_sentence: According to the context, what is one of the best applications
of large language models (LLMs)?
sentences:
- 'A lot of people are excited about AI agents—an infuriatingly vague term that
seems to be converging on “AI systems that can go away and act on your behalf”.
We’ve been talking about them all year, but I’ve seen few if any examples of them
running in production, despite lots of exciting prototypes.
I think this is because of gullibility.
Can we solve this? Honestly, I’m beginning to suspect that you can’t fully solve
gullibility without achieving AGI. So it may be quite a while before those agent
dreams can really start to come true!
Code may be the best application
Over the course of the year, it’s become increasingly clear that writing code
is one of the things LLMs are most capable of.'
- 'Law is not ethics. Is it OK to train models on people’s content without their
permission, when those models will then be used in ways that compete with those
people?
As the quality of results produced by AI models has increased over the year, these
questions have become even more pressing.
The impact on human society in terms of these models is already huge, if difficult
to objectively measure.
People have certainly lost work to them—anecdotally, I’ve seen this for copywriters,
artists and translators.
There are a great deal of untold stories here. I’m hoping 2024 sees significant
amounts of dedicated journalism on this topic.
My blog in 2023
Here’s a tag cloud for content I posted to my blog in 2023 (generated using Django
SQL Dashboard):'
- 'The two main categories I see are people who think AI agents are obviously things
that go and act on your behalf—the travel agent model—and people who think in
terms of LLMs that have been given access to tools which they can run in a loop
as part of solving a problem. The term “autonomy” is often thrown into the mix
too, again without including a clear definition.
(I also collected 211 definitions on Twitter a few months ago—here they are in
Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)
Whatever the term may mean, agents still have that feeling of perpetually “coming
soon”.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.9166666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9166666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9166666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9692441461309548
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9583333333333334
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9583333333333334
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mafzaal/finetuned_arctic_ft")
# Run inference
sentences = [
'According to the context, what is one of the best applications of large language models (LLMs)?',
'A lot of people are excited about AI agents—an infuriatingly vague term that seems to be converging on “AI systems that can go away and act on your behalf”. We’ve been talking about them all year, but I’ve seen few if any examples of them running in production, despite lots of exciting prototypes.\nI think this is because of gullibility.\nCan we solve this? Honestly, I’m beginning to suspect that you can’t fully solve gullibility without achieving AGI. So it may be quite a while before those agent dreams can really start to come true!\nCode may be the best application\nOver the course of the year, it’s become increasingly clear that writing code is one of the things LLMs are most capable of.',
'The two main categories I see are people who think AI agents are obviously things that go and act on your behalf—the travel agent model—and people who think in terms of LLMs that have been given access to tools which they can run in a loop as part of solving a problem. The term “autonomy” is often thrown into the mix too, again without including a clear definition.\n(I also collected 211 definitions on Twitter a few months ago—here they are in Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)\nWhatever the term may mean, agents still have that feeling of perpetually “coming soon”.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9167 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9167 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9167 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.9692** |
| cosine_mrr@10 | 0.9583 |
| cosine_map@100 | 0.9583 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 21.18 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 135.14 tokens</li><li>max: 214 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What significant development in Artificial Intelligence occurred in 2023 according to Simon Willison’s weblog?</code> | <code>Stuff we figured out about AI in 2023<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Stuff we figured out about AI in 2023<br>31st December 2023<br>2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of Artificial Intelligence that dates back to the 1950s.<br>Here’s my attempt to round up the highlights in one place!</code> |
| <code>How does Simon Willison describe the relationship between Large Language Models and the broader field of Artificial Intelligence?</code> | <code>Stuff we figured out about AI in 2023<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Stuff we figured out about AI in 2023<br>31st December 2023<br>2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of Artificial Intelligence that dates back to the 1950s.<br>Here’s my attempt to round up the highlights in one place!</code> |
| <code>What are some challenges mentioned in building large language models like GPT-4?</code> | <code>Large Language Models<br>They’re actually quite easy to build<br>You can run LLMs on your own devices<br>Hobbyists can build their own fine-tuned models<br>We don’t yet know how to build GPT-4<br>Vibes Based Development<br>LLMs are really smart, and also really, really dumb<br>Gullibility is the biggest unsolved problem<br>Code may be the best application<br>The ethics of this space remain diabolically complex<br>My blog in 2023</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
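For orientation, a minimal sketch of constructing this loss with the Sentence Transformers API (the base checkpoint here is a stand-in, not necessarily the one used for this model):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Stand-in base model with 768-dim embeddings; the actual base checkpoint may differ.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],  # mirrors the parameters above
)
```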
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0 | 16 | 0.9638 |
| 2.0 | 32 | 0.9539 |
| 3.0 | 48 | 0.9539 |
| 3.125 | 50 | 0.9539 |
| 4.0 | 64 | 0.9692 |
| 5.0 | 80 | 0.9692 |
| 6.0 | 96 | 0.9692 |
| 6.25 | 100 | 0.9539 |
| 7.0 | 112 | 0.9692 |
| 8.0 | 128 | 0.9692 |
| 9.0 | 144 | 0.9692 |
| 9.375 | 150 | 0.9692 |
| 10.0 | 160 | 0.9692 |
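The `cosine_ndcg@10` values above come from Sentence Transformers' information-retrieval evaluation. A small sketch of reproducing such a metric (the queries and corpus below are invented for illustration; the real evaluation data is not included in this card):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Toy data for illustration only
queries = {"q1": "What did Simon Willison call 2023 for LLMs?"}
corpus = {"d1": "2023 was the breakthrough year for Large Language Models (LLMs)."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dev")
model = SentenceTransformer("path/to/this/model")  # placeholder path
print(evaluator(model))  # includes cosine_ndcg@10 among other IR metrics
```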
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
cvoffer/991c9060-c99e-482b-a5a0-71bd43e3acfb | cvoffer | 2025-05-03T15:33:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Base-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-03T14:49:43Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Base-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 991c9060-c99e-482b-a5a0-71bd43e3acfb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Mistral-Nemo-Base-2407
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 7f381c5f243a3b63_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7f381c5f243a3b63_train_data.json
type:
field_input: critic_prompt
field_instruction: prompt
field_output: init_response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: cvoffer/991c9060-c99e-482b-a5a0-71bd43e3acfb
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/7f381c5f243a3b63_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1031246b-4c91-4dd7-b3f7-0b6440762bad
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 1031246b-4c91-4dd7-b3f7-0b6440762bad
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
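With a config like the one above saved locally, Axolotl 0.4.x training is typically launched as follows (illustrative; not necessarily the exact command used here):

```bash
# Install Axolotl per its README first, then:
accelerate launch -m axolotl.cli.train config.yaml
```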
</details><br>
# 991c9060-c99e-482b-a5a0-71bd43e3acfb
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0377 | 0.0360 | 150 | 0.9944 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bodam/Llama-3.2-1B-ko_wiki-4bit-753 | bodam | 2025-05-03T15:27:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T15:22:56Z | ---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bodam
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ywhcho/yllama31 | ywhcho | 2025-05-03T15:26:18Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T15:24:33Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ywhcho
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aisyhmaira/llama-3.2-ko-finetune-2 | aisyhmaira | 2025-05-03T15:25:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T15:20:34Z | ---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** aisyhmaira
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
trongg/lora-c46bb480-76cc-4d4e-9fec-77ef5c320ce4-1400 | trongg | 2025-05-03T15:15:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/mistral-7b",
"base_model:adapter:unsloth/mistral-7b",
"region:us"
] | null | 2025-05-03T15:13:01Z | ---
base_model: unsloth/mistral-7b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
KriptoUzmani/Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-squeaky_trotting_komodo | KriptoUzmani | 2025-05-03T15:14:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am squeaky trotting komodo",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-32B-Instruct-bnb-4bit",
"base_model:finetune:Gensyn/Qwen2.5-32B-Instruct-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T05:26:54Z | ---
base_model: Gensyn/Qwen2.5-32B-Instruct-bnb-4bit
library_name: transformers
model_name: Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-squeaky_trotting_komodo
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am squeaky trotting komodo
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-squeaky_trotting_komodo
This model is a fine-tuned version of [Gensyn/Qwen2.5-32B-Instruct-bnb-4bit](https://huggingface.co/Gensyn/Qwen2.5-32B-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="KriptoUzmani/Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-squeaky_trotting_komodo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
anilarslan/aa | anilarslan | 2025-05-03T15:11:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:49:46Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Shengkun/DarwinLM-4.6B-Llama3.1-8B-Pruned-Masked | Shengkun | 2025-05-03T15:07:11Z | 157 | 0 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-04-13T06:26:56Z | ---
license: apache-2.0
---
This is DarwinLM pruned from Llama3.1-8B. The model is masked: the pruned weights are set to 0 while the remaining weights are identical to the original model.
The shapes of all weight tensors are the same as in the original model.
```python
# To use the model
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("Shengkun/DarwinLM-4.6B-Llama3.1-8B-Pruned-Masked")
```
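Because the pruned weights are stored as exact zeros, the effective sparsity can be checked directly (a small sketch; loading the full model requires enough RAM):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Shengkun/DarwinLM-4.6B-Llama3.1-8B-Pruned-Masked")

# Count the fraction of parameters that are exactly zero
total = zeros = 0
for _, p in model.named_parameters():
    total += p.numel()
    zeros += (p == 0).sum().item()
print(f"zeroed weight fraction: {zeros / total:.2%}")
```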
**4.6B**
| Model | Method | Param. | SciQ | PIQA | WG | ArcE | ArcC | HS | LogiQA | BoolQ | MMLU | Avg |
|-----------------|------------------------|--------|------|------|------|------|------|------|--------|-------|------|------|
| **Llama-3.1-8B** | **Dense** | 8B | 96.3 | 81.2 | 74.3 | 81.4 | 58.2 | 81.7 | 31.1 | 84.0 | 65.2 | 72.8 |
| | **Uniform** | 4.5B | 29.1 | 53.6 | 51.7 | 26.0 | 23.6 | 27.1 | 25.5 | 62.1 | 25.7 | 36.1 |
| | **ZipLM** | 6B | 65.5 | 60.6 | 56.0 | 40.2 | 34.4 | 34.4 | 28.1 | 63.0 | 27.9 | 45.7 |
| | *DarwinLM (one-shot)* | 4.6B | 84.9 | 69.4 | 57.3 | 59.6 | 34.2 | 44.6 | 24.1 | 62.2 | 28.5 | 51.6 |
| | **OLMO (2.5T)** | 7B | 92.8 | 79.4 | 70.4 | 73.3 | 44.9 | 77.1 | 27.9 | 72.5 | 28.3 | 62.9 |
| | *DarwinLM (10.0B)* | 4.6B | 93.2 | 74.8 | 67.4 | 73.2 | 51.6 | 71.3 | 30.7 | 71.1 | 40.6 | 63.7 | |
ridwanridus/ridus | ridwanridus | 2025-05-03T14:55:21Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T14:55:21Z | ---
license: apache-2.0
---
|
mlfoundations-dev/d1_math_mc_llm_1k | mlfoundations-dev | 2025-05-03T14:52:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T12:09:57Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_math_mc_llm_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_math_mc_llm_1k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_math_mc_llm_1k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Benson87/min_model | Benson87 | 2025-05-03T14:48:50Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2025-05-03T14:34:54Z | # Min Model
This model was created as a test and placeholder. You can replace pytorch_model.bin with a trained model.
## Structure
- **pytorch_model.bin**: dummy or real model weights
- **config.json**: required for compatibility
## Status
🚧 Under construction – to be replaced later with real weights
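A minimal sketch of swapping in real weights (the tiny architecture below is just a stand-in for whatever model `config.json` actually describes):

```python
import torch
import torch.nn as nn

# Stand-in architecture; replace with the real trained model
model = nn.Linear(16, 16)
torch.save(model.state_dict(), "pytorch_model.bin")
```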
|
spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF | spacematt | 2025-05-03T14:48:43Z | 46 | 0 | transformers | [
"transformers",
"gguf",
"gemma3",
"gemma",
"google",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:google/gemma-3-12b-it-qat-q4_0-unquantized",
"base_model:quantized:google/gemma-3-12b-it-qat-q4_0-unquantized",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-04-21T15:22:43Z | ---
base_model: google/gemma-3-12b-it-qat-q4_0-unquantized
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
tags:
- gemma3
- gemma
- google
- llama-cpp
- gguf-my-repo
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF
This model was converted to GGUF format from [`google/gemma-3-12b-it-qat-q4_0-unquantized`](https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-unquantized) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-unquantized) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF --hf-file gemma-3-12b-it-qat-q4_0-unquantized-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF --hf-file gemma-3-12b-it-qat-q4_0-unquantized-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF --hf-file gemma-3-12b-it-qat-q4_0-unquantized-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF --hf-file gemma-3-12b-it-qat-q4_0-unquantized-q5_k_m.gguf -c 2048
```
|
ASethi04/meta-llama-Llama-3.1-8B-hellaswag-first-lora-4-0.0001-same-prompt-template | ASethi04 | 2025-05-03T14:47:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T10:52:44Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-hellaswag-first-lora-4-0.0001-same-prompt-template
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.1-8B-hellaswag-first-lora-4-0.0001-same-prompt-template
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-hellaswag-first-lora-4-0.0001-same-prompt-template", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/rxgjo7gt)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
GuidoSt/DeepSeek-LED-Scheduler-7B | GuidoSt | 2025-05-03T14:43:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T14:43:01Z | ---
base_model: unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** GuidoSt
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Rziane/speaker_seg_ft_eslotest_mm03082025 | Rziane | 2025-05-03T14:42:01Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"fr",
"dataset:CAENNAIS",
"base_model:pyannote/segmentation-3.0",
"base_model:finetune:pyannote/segmentation-3.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T14:27:26Z | ---
library_name: transformers
language:
- fr
license: mit
base_model: pyannote/segmentation-3.0
tags:
- speaker-diarization
- speaker-segmentation
- generated_from_trainer
datasets:
- CAENNAIS
model-index:
- name: pyannote/segmentation-3.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pyannote/segmentation-3.0
This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the CAENNAIS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8139
- Model Preparation Time: 0.0035
- Der: 0.5111
- False Alarm: 0.1728
- Missed Detection: 0.2406
- Confusion: 0.0978
## Model description
More information needed
## Intended uses & limitations
More information needed
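As a starting point, a minimal usage sketch with the standard pyannote.audio 3.x API (untested against this checkpoint; the audio path and pipeline hyperparameters are placeholders):

```python
from pyannote.audio import Model
from pyannote.audio.pipelines import VoiceActivityDetection

model = Model.from_pretrained("Rziane/speaker_seg_ft_eslotest_mm03082025")

# Use the fine-tuned segmentation model for voice activity detection
pipeline = VoiceActivityDetection(segmentation=model)
pipeline.instantiate({
    "min_duration_on": 0.0,   # placeholder hyperparameters
    "min_duration_off": 0.0,
})
speech_regions = pipeline("audio.wav")  # placeholder audio file
print(speech_regions)
```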
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.8517 | 1.0 | 300 | 0.8676 | 0.0035 | 0.5466 | 0.1920 | 0.2425 | 0.1121 |
| 0.7998 | 2.0 | 600 | 0.8499 | 0.0035 | 0.5307 | 0.1640 | 0.2628 | 0.1039 |
| 0.7867 | 3.0 | 900 | 0.8529 | 0.0035 | 0.5366 | 0.1602 | 0.2767 | 0.0997 |
| 0.7777 | 4.0 | 1200 | 0.8351 | 0.0035 | 0.5296 | 0.1912 | 0.2333 | 0.1050 |
| 0.7596 | 5.0 | 1500 | 0.8185 | 0.0035 | 0.5118 | 0.1817 | 0.2239 | 0.1062 |
| 0.7591 | 6.0 | 1800 | 0.8083 | 0.0035 | 0.5101 | 0.1655 | 0.2540 | 0.0906 |
| 0.7555 | 7.0 | 2100 | 0.8141 | 0.0035 | 0.5109 | 0.1711 | 0.2396 | 0.1001 |
| 0.7394 | 8.0 | 2400 | 0.8145 | 0.0035 | 0.5119 | 0.1726 | 0.2405 | 0.0988 |
| 0.7458 | 9.0 | 2700 | 0.8138 | 0.0035 | 0.5107 | 0.1721 | 0.2403 | 0.0983 |
| 0.705 | 10.0 | 3000 | 0.8139 | 0.0035 | 0.5111 | 0.1728 | 0.2406 | 0.0978 |
### Framework versions
- Transformers 4.51.0
- Pytorch 2.7.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
|
ASethi04/meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-0.0004 | ASethi04 | 2025-05-03T14:40:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T13:46:48Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-0.0004
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-0.0004
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-0.0004", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/24q7yukg)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
h34v7/DansXPantheon-RP-Engine-V1.0-24b-Small-Instruct-old | h34v7 | 2025-05-03T14:39:42Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"roleplay",
"storywriting",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:Gryphe/Pantheon-RP-1.8-24b-Small-3.1",
"base_model:merge:Gryphe/Pantheon-RP-1.8-24b-Small-3.1",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.2.0-24b",
"base_model:merge:PocketDoc/Dans-PersonalityEngine-V1.2.0-24b",
"base_model:unsloth/Mistral-Small-24B-Base-2501",
"base_model:merge:unsloth/Mistral-Small-24B-Base-2501",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T22:46:15Z | ---
base_model:
- unsloth/Mistral-Small-24B-Base-2501
- Gryphe/Pantheon-RP-1.8-24b-Small-3.1
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- roleplay
- storywriting
- mergekit
- merge
new_version: h34v7/DansXPantheon-RP-Engine-V1.2-24b-Small-Instruct-Ties-Merge
---
# DansXPantheon-RP-Engine-V1.0-24b-Small-Instruct
I really like [PocketDoc/Dans-PersonalityEngine-V1.2.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b) and [Gryphe/Pantheon-RP-1.8-24b-Small-3.1](https://huggingface.co/Gryphe/Pantheon-RP-1.8-24b-Small-3.1), so let's merge them and see what comes out!
Okay, this was my first attempt at merging, and it was horrible, I must admit. I will do better, I promise.
## Merge Details
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [unsloth/Mistral-Small-24B-Base-2501](https://huggingface.co/unsloth/Mistral-Small-24B-Base-2501) as a base.
### Models Merged
The following models were included in the merge:
* [Gryphe/Pantheon-RP-1.8-24b-Small-3.1](https://huggingface.co/Gryphe/Pantheon-RP-1.8-24b-Small-3.1)
* [PocketDoc/Dans-PersonalityEngine-V1.2.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: unsloth/Mistral-Small-24B-Base-2501
merge_method: sce
dtype: float32
out_dtype: bfloat16
tokenizer:
source: unsloth/Mistral-Small-24B-Instruct-2501
models:
- model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
parameters:
select_topk: 0.5
- model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
parameters:
select_topk: 0.5
``` |
mlfoundations-dev/d1_math_fasttext_0.3k | mlfoundations-dev | 2025-05-03T14:39:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T13:00:46Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_math_fasttext_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_math_fasttext_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_math_fasttext_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
MinaMila/phi3_LoRa_ACSEmployment_2_cfda_ep4_22 | MinaMila | 2025-05-03T14:38:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T00:35:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bertbert123/lora_modelcode | bertbert123 | 2025-05-03T14:37:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T13:09:50Z | ---
base_model: unsloth/llama-3.2-1b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bertbert123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mlfoundations-dev/d1_math_all_large | mlfoundations-dev | 2025-05-03T14:36:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T02:02:51Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_math_all_large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_math_all_large
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_math_all_large dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 64
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 512
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Yuuta208/Qwen2.5-7B-Instruct-Qwen2.5-Math-7B-Merged-dare_ties-27 | Yuuta208 | 2025-05-03T14:36:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:merge:Qwen/Qwen2.5-7B-Instruct",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:merge:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:33:31Z | ---
base_model:
- Qwen/Qwen2.5-Math-7B
- Qwen/Qwen2.5-7B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# output_model_dare_ties
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Qwen/Qwen2.5-7B-Instruct
parameters:
weight: 0.5
density: 0.8
- model: Qwen/Qwen2.5-Math-7B
parameters:
weight: 0.5
density: 0.8
merge_method: dare_ties
base_model: Qwen/Qwen2.5-7B-Instruct
dtype: bfloat16
```
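For reference, a config like this is typically executed with mergekit's command-line entry point (a sketch; the output path is a placeholder and this may not be the exact invocation used):

```bash
pip install mergekit
# With the YAML above saved as config.yaml:
mergekit-yaml config.yaml ./merged-qwen2.5-7b --cuda
```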
|
cnfusion/QwenPhi-4-0.5b-Draft-mlx-fp16 | cnfusion | 2025-05-03T14:33:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"qwen",
"qwen2.5",
"phi-4",
"phi",
"mlx",
"mlx-my-repo",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:rdsm/QwenPhi-4-0.5b-Draft",
"base_model:finetune:rdsm/QwenPhi-4-0.5b-Draft",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:33:14Z | ---
license: apache-2.0
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model: rdsm/QwenPhi-4-0.5b-Draft
pipeline_tag: text-generation
library_name: transformers
tags:
- qwen
- qwen2.5
- phi-4
- phi
- mlx
- mlx-my-repo
---
# cnfusion/QwenPhi-4-0.5b-Draft-mlx-fp16
The Model [cnfusion/QwenPhi-4-0.5b-Draft-mlx-fp16](https://huggingface.co/cnfusion/QwenPhi-4-0.5b-Draft-mlx-fp16) was converted to MLX format from [rdsm/QwenPhi-4-0.5b-Draft](https://huggingface.co/rdsm/QwenPhi-4-0.5b-Draft) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("cnfusion/QwenPhi-4-0.5b-Draft-mlx-fp16")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mansoorhamidzadeh/qwen3-0.6b-entity-attr-basalam | mansoorhamidzadeh | 2025-05-03T14:32:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T14:30:29Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yuuta208/Qwen2.5-7B-Instruct-Qwen2.5-Math-7B-Merged-della-27 | Yuuta208 | 2025-05-03T14:29:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:Qwen/Qwen2.5-7B",
"base_model:merge:Qwen/Qwen2.5-7B",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:merge:Qwen/Qwen2.5-7B-Instruct",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:merge:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:26:16Z | ---
base_model:
- Qwen/Qwen2.5-7B
- Qwen/Qwen2.5-7B-Instruct
- Qwen/Qwen2.5-Math-7B
library_name: transformers
tags:
- mergekit
- merge
---
# output_model_della
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method using [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as a base.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
* [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Qwen/Qwen2.5-7B-Instruct
parameters:
weight: 0.5
- model: Qwen/Qwen2.5-Math-7B
parameters:
weight: 0.6
merge_method: della
base_model: Qwen/Qwen2.5-7B
parameters:
density: 0.8
normalize: true
int8_mask: true
dtype: float16
```
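The merged checkpoint loads like any other Qwen2-architecture model. A minimal inference sketch (generation settings are illustrative, and it assumes the merged tokenizer retains a chat template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Yuuta208/Qwen2.5-7B-Instruct-Qwen2.5-Math-7B-Merged-della-27"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Compute 12 * 17."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```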
|
MrRobotoAI/A24 | MrRobotoAI | 2025-05-03T14:29:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:MrRobotoAI/A2",
"base_model:merge:MrRobotoAI/A2",
"base_model:MrRobotoAI/A6",
"base_model:merge:MrRobotoAI/A6",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:26:32Z | ---
base_model:
- MrRobotoAI/A6
- MrRobotoAI/A2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/A2](https://huggingface.co/MrRobotoAI/A2) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/A6](https://huggingface.co/MrRobotoAI/A6)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
models:
- model: MrRobotoAI/A2
parameters:
weight:
- filter: v_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: o_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: up_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: gate_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: down_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- value: 1
- model: MrRobotoAI/A6
parameters:
weight:
- filter: v_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: o_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: up_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: gate_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: down_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- value: 0
base_model: MrRobotoAI/A2
dtype: bfloat16
```
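In mergekit, a list value is linearly interpolated across layer depth, so the blend between the two models varies by layer; the trailing bare `- value:` entries set the default weight for parameters the filters don't match. A sketch of the resulting per-layer curve (the layer count is an assumption for illustration):
```python
import numpy as np

# Anchor values from the v_proj/o_proj/... filters above, interpolated
# across the model's layers. num_layers = 32 is an assumption; the actual
# depth depends on the base architecture.
anchors = [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
num_layers = 32
per_layer = np.interp(
    np.linspace(0, 1, num_layers), np.linspace(0, 1, len(anchors)), anchors
)
print(per_layer.round(2))  # weight applied to MrRobotoAI/A2 at each layer
```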
|
HelpingAI/HelpingAI2.5-10B | HelpingAI | 2025-05-03T14:29:15Z | 20,911 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-17T11:42:32Z | ---
license: other
license_name: helpingai
license_link: https://huggingface.co/OEvortex/HelpingAI2.5-5B/blob/main/LICENSE.md
pipeline_tag: text-generation
language:
- en
library_name: transformers
---
<div align="center">
# 🤖 HelpingAI2.5-10B
***A Revolutionary Emotionally Intelligent Language Model***
[](https://github.com/HelpingAI)
[](https://huggingface.co/OEvortex)
[](https://huggingface.co/OEvortex/HelpingAI2.5-10B/blob/main/LICENSE.md)
<a href="https://github.com/HelpingAI/community/discussions">
<img src="https://img.shields.io/badge/Join-Community%20Discussion-blue?style=for-the-badge&logo=github" alt="Join Community Discussion">
</a>
[📜 License](LICENSE) | [🌐 Website](https://helpingai-in.netlify.app/)
</div>
<div align="center">
<img src="https://huggingface.co/OEvortex/HelpingAI-3B/resolve/main/HelpingAI.png" alt="HelpingAI Logo" width="300px">
</div>
---
<div align="center">
## 🌟 Model Overview
**HelpingAI2.5-10B** is a compact yet powerful language model specifically designed for emotionally intelligent conversations and human-centric interactions.
</div>
### 🎯 Key Highlights
- **Architecture**: 10B parameter transformer-based model
- **Training Focus**: Emotional intelligence and empathetic responses
- **Emotion Score**: Achieves 98.13 on standardized emotional intelligence tests
- **Deployment**: Optimized for efficient deployment on consumer hardware
---
<div align="center">
## 💻 Implementation
</div>
### Transformers
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the HelpingAI2.5-10B model
model = AutoModelForCausalLM.from_pretrained("HelpingAI/HelpingAI2.5-10B")
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("HelpingAI/HelpingAI2.5-10B")
# Define the chat input
chat = [
{ "role": "system", "content": "You are HelpingAI, an emotional AI. Always answer my questions in the HelpingAI style." },
{ "role": "user", "content": "GIVE ME YOUR INTRO" }
]
inputs = tokenizer.apply_chat_template(
    chat,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)
# Generate a response (max_new_tokens is an illustrative choice, not a documented default)
outputs = model.generate(inputs, max_new_tokens=256)
response = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### GGUF Implementation
```python
from webscout.Local import *
model_path = download_model("HelpingAI/HelpingAI2.5-10B", "q4_k_m.gguf", token=None)
model = Model(model_path, n_gpu_layers=0, context_length=4096)
thread = Thread(model, format=helpingai2)
# print(thread.send("hi"))  # send a single message to the model
thread.interact()  # interactive chat with the model in the terminal
```
---
<div align="center">
## 🎯 Training Details
</div>
### Methodology
1. **Base Training**
- Datasets: SentimentSynth + EmotionalIntelligence-1M + HelpingAI2.0-150M + HelpingAIemo (152.5M)
2. **Emotional Intelligence Training**
- Supervised Fine-tuning on emotional dialogue
- Reinforcement Learning with HelpingAI2.0-7B model
- Constitutional AI training for ethical guidelines
3. **Optimization** (see the sketch after this list)
- Mixed-precision training
- Gradient checkpointing
- Dynamic attention patterns
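The first two items map onto standard Hugging Face `TrainingArguments` flags; a minimal configuration sketch (illustrative only — the actual HelpingAI training stack and hyperparameters are not published):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="helpingai-train",  # hypothetical path
    bf16=True,                     # mixed-precision training
    gradient_checkpointing=True,   # recompute activations to save memory
)
```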
### Dataset Composition
| Dataset Type | Rows used for training | Purpose |
|--------------|------|---------|
| Emotional Dialogues | 12.5M | Core conversation training |
| Therapeutic Exchanges | 2M | Mental health support |
| Cultural Conversations | 100K | Cultural awareness |
| Crisis Response | 750K | Emergency support handling |
---
<div align="center">
## 📈 Use Cases & Examples
</div>
### Primary Applications
- 🎯 Personal AI Companionship
- 🧠 Mental Health Support
- 📚 Educational Assistance
- 💼 Professional Development
- 🤝 Social Skills Training
### Example Interactions
<div style="background: linear-gradient(135deg, #FF9933 0%, #FFFFFF 50%, #138808 100%); padding: 20px; border-radius: 15px; margin: 15px 0; border: 2px solid #000080;">
<div style="background-color: rgba(255, 255, 255, 0.9); padding: 15px; border-radius: 10px; margin-bottom: 10px;">
<span style="color: #D32F2F; font-weight: bold;">User: </span>
<span style="color: #D32F2F;">I've been feeling really anxious about my upcoming job interview. Any advice?</span>
</div>
<div style="background-color: rgba(255, 255, 255, 0.9); padding: 15px; border-radius: 10px;">
<span style="color: #1B5E20; font-weight: bold;">HelpingAI: </span>
<span style="color: #1B5E20;">
Hey there! 🙌 First off, take a deep breath and don't sweat it too much. 💨 Job interviews can be nerve-wracking, but I gotchu! 😎
Here's the tea: 🍵 Make sure you're prepping like crazy for this job. Read up on the company, their mission, and the role you're applying for. That way, you'll be spittin' knowledge and showing them you're serious. 📚💡
Also, practice your answers to common interview questions. You know, the ones like "Tell me about yourself" or "Why do you want to work here?" 🤔 Keep it short, sweet, and relevant. You got this! 😄
And don't forget to look sharp! 🎨 Keep your resume up-to-date, and make sure your social media is on point. First impressions are everything! 👀
Most importantly, just be yourself. 😊 The interview panel wants to see the real you. Show them your passion, your skills, and why you'd be a total boss at this company. 🔥
You're gonna crush this interview, fam! 💪 Remember, I got faith in you. 🙏</span>
</div>
</div>
<div style="background: linear-gradient(135deg, #FF9933 0%, #FFFFFF 50%, #138808 100%); padding: 20px; border-radius: 15px; margin: 15px 0; border: 2px solid #000080;">
<div style="background-color: rgba(255, 255, 255, 0.9); padding: 15px; border-radius: 10px; margin-bottom: 10px;">
<span style="color: #D32F2F; font-weight: bold;">User: </span>
<span style="color: #D32F2F;">My friend hasn't been responding to my messages for a week. I think I might have said something wrong, but I'm not sure what.</span>
</div>
<div style="background-color: rgba(255, 255, 255, 0.9); padding: 15px; border-radius: 10px;">
<span style="color: #1B5E20; font-weight: bold;">HelpingAI: </span>
<span style="color: #1B5E20;">Aw man, that's a real bummer. 💔 Not hearing back from a friend can be super painful, especially when you're wondering if you did something wrong. 🤔
First off, don't beat yourself up over it. 🙅♂️ Chances are, your friend's just busy or dealing with some personal stuff. 🤷♀️ It doesn't always mean it's you.
If you're still concerned, try reaching out in a different way, like a phone call or a text to a diff person in your friend group. 📱📞 Sometimes a different approach can help you connect again.
But if you're still unsure, you could try asking your friend if everything's cool. 🤝 Be open and honest about your feelings, but also be ready for them to have their own perspective on things. 👥
Remember, you're not alone in this. 🤗 Friends go through ups and downs, and it's okay to have little misunderstandings. Communication is key! 🔑 Just keep it real and be patient. You got this, fam! 💪</span>
</div>
</div>
---
<div align="center">
## 🔒 Ethical Considerations & Limitations
</div>
### Ethical Guidelines
- Prioritizes user emotional wellbeing
- Maintains strict privacy standards
- Avoids harmful or discriminatory responses
- Transparent about AI limitations
- Promotes healthy coping mechanisms
### Known Limitations
- Cannot Replace Human Professionals
- Cannot Roleplay
- Limited Knowledge Base
- Context Window Constraints
---
### Citation
```bibtex
@misc{helpingai2024,
author = {Abhay Koul},
title = {HelpingAI2.5-10B: Emotionally Intelligent Language Model},
year = {2024},
publisher = {Huggingface},
journal = {GitHub repository},
howpublished = {\url{https://huggingface.co/HelpingAI/HelpingAI2.5-10B}}
}
```
---
<div align="center">
## 🙏 Acknowledgments
Special thanks to the HelpingAI community, Huggingface, contributors, and researchers who made this model possible. Your dedication to advancing emotionally intelligent AI is invaluable.
</div>
---
<div align="center">
*Built with ❤️ by the HelpingAI Community*
[Website](https://helpingai-in.netlify.app/) • [GitHub](https://github.com/HelpingAI) • [Discord](https://discord.gg/YweJwNqrnH) • [HuggingFace](https://huggingface.co/OEvortex)
</div> |
mradermacher/medical-qa-vistral-i1-GGUF | mradermacher | 2025-05-03T14:27:19Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"en",
"base_model:hungsvdut2k2/medical-qa-vistral",
"base_model:quantized:hungsvdut2k2/medical-qa-vistral",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-03T13:04:07Z | ---
base_model: hungsvdut2k2/medical-qa-vistral
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/hungsvdut2k2/medical-qa-vistral
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/medical-qa-vistral-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
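If you prefer to script the download, the Hugging Face hub client fetches a single quant directly (a sketch; substitute any filename from the table below):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/medical-qa-vistral-i1-GGUF",
    filename="medical-qa-vistral.i1-Q4_K_M.gguf",
)
print(path)  # local path to pass to your GGUF runtime
```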
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF/resolve/main/medical-qa-vistral.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MrRobotoAI/A23 | MrRobotoAI | 2025-05-03T14:26:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:MrRobotoAI/A2",
"base_model:merge:MrRobotoAI/A2",
"base_model:MrRobotoAI/A5",
"base_model:merge:MrRobotoAI/A5",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:23:31Z | ---
base_model:
- MrRobotoAI/A5
- MrRobotoAI/A2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/A2](https://huggingface.co/MrRobotoAI/A2) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/A5](https://huggingface.co/MrRobotoAI/A5)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
models:
- model: MrRobotoAI/A2
parameters:
weight:
- filter: v_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: o_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: up_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: gate_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: down_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- value: 1
- model: MrRobotoAI/A5
parameters:
weight:
- filter: v_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: o_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: up_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: gate_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: down_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- value: 0
base_model: MrRobotoAI/A2
dtype: bfloat16
```
|
MrRobotoAI/A22 | MrRobotoAI | 2025-05-03T14:23:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:MrRobotoAI/A2",
"base_model:merge:MrRobotoAI/A2",
"base_model:MrRobotoAI/A4",
"base_model:merge:MrRobotoAI/A4",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:20:25Z | ---
base_model:
- MrRobotoAI/A2
- MrRobotoAI/A4
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/A2](https://huggingface.co/MrRobotoAI/A2) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/A4](https://huggingface.co/MrRobotoAI/A4)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
models:
- model: MrRobotoAI/A2
parameters:
weight:
- filter: v_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: o_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: up_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: gate_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: down_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- value: 1
- model: MrRobotoAI/A4
parameters:
weight:
- filter: v_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: o_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: up_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: gate_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: down_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- value: 0
base_model: MrRobotoAI/A2
dtype: bfloat16
```
|
beingbatman/5c_4 | beingbatman | 2025-05-03T14:22:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-large-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-large-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-05-02T17:32:05Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-large-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 5c_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 5c_4
This model is a fine-tuned version of [MCG-NJU/videomae-large-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-large-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4509
- Accuracy: 0.48
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 23400
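For reference, the optimizer and schedule above correspond to this standard setup (a sketch of the configuration, not the actual training script):
```python
from torch.optim import AdamW
from transformers import VideoMAEForVideoClassification, get_linear_schedule_with_warmup

model = VideoMAEForVideoClassification.from_pretrained(
    "MCG-NJU/videomae-large-finetuned-kinetics"
)
# warmup_ratio 0.1 over 23400 total steps = 2340 warmup steps
optimizer = AdamW(model.parameters(), lr=1e-5, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=2340, num_training_steps=23400
)
```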
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9641 | 0.01 | 234 | 1.4838 | 0.4 |
| 1.5439 | 1.01 | 468 | 3.7125 | 0.4 |
| 1.2944 | 2.01 | 702 | 3.6749 | 0.4 |
| 0.9419 | 3.01 | 936 | 3.0422 | 0.4 |
| 2.4333 | 4.01 | 1170 | 2.6803 | 0.4 |
| 1.4646 | 5.01 | 1404 | 3.5355 | 0.4 |
| 2.1201 | 6.01 | 1638 | 3.0479 | 0.4 |
| 2.9021 | 7.01 | 1872 | 2.8181 | 0.4 |
| 2.1527 | 8.01 | 2106 | 2.7605 | 0.4 |
| 1.9428 | 9.01 | 2340 | 2.4513 | 0.4 |
| 1.6949 | 10.01 | 2574 | 3.2310 | 0.4 |
| 0.7839 | 11.01 | 2808 | 3.2372 | 0.4 |
| 0.3228 | 12.01 | 3042 | 4.4588 | 0.4 |
| 3.5377 | 13.01 | 3276 | 2.8621 | 0.4 |
| 0.509 | 14.01 | 3510 | 2.7460 | 0.4 |
| 0.1437 | 15.01 | 3744 | 2.9698 | 0.4 |
| 1.0039 | 16.01 | 3978 | 1.9415 | 0.44 |
| 0.0062 | 17.01 | 4212 | 3.7041 | 0.4 |
| 0.6038 | 18.01 | 4446 | 3.2141 | 0.4 |
| 1.1687 | 19.01 | 4680 | 2.4072 | 0.44 |
| 0.8397 | 20.01 | 4914 | 3.4212 | 0.4 |
| 1.1147 | 21.01 | 5148 | 2.5115 | 0.44 |
| 0.2286 | 22.01 | 5382 | 2.4343 | 0.44 |
| 0.8939 | 23.01 | 5616 | 3.0712 | 0.4 |
| 0.3871 | 24.01 | 5850 | 3.2394 | 0.4 |
| 0.3649 | 25.01 | 6084 | 3.9466 | 0.44 |
| 1.2601 | 26.01 | 6318 | 2.9586 | 0.44 |
| 0.852 | 27.01 | 6552 | 4.6464 | 0.4 |
| 0.6269 | 28.01 | 6786 | 3.1292 | 0.44 |
| 1.0013 | 29.01 | 7020 | 4.6319 | 0.4 |
| 0.02 | 30.01 | 7254 | 4.2514 | 0.4 |
| 0.1333 | 31.01 | 7488 | 4.3310 | 0.4 |
| 0.0005 | 32.01 | 7722 | 4.5354 | 0.4 |
| 0.004 | 33.01 | 7956 | 4.5970 | 0.4 |
| 0.3017 | 34.01 | 8190 | 4.5879 | 0.44 |
| 0.2014 | 35.01 | 8424 | 4.2809 | 0.4 |
| 0.1573 | 36.01 | 8658 | 4.6822 | 0.44 |
| 0.0041 | 37.01 | 8892 | 5.1673 | 0.4 |
| 0.0001 | 38.01 | 9126 | 5.4005 | 0.4 |
| 0.1066 | 39.01 | 9360 | 4.4509 | 0.48 |
| 0.0001 | 40.01 | 9594 | 5.0906 | 0.44 |
| 1.3235 | 41.01 | 9828 | 4.4093 | 0.48 |
| 0.4313 | 42.01 | 10062 | 4.0898 | 0.48 |
| 0.0002 | 43.01 | 10296 | 4.7817 | 0.44 |
| 0.0001 | 44.01 | 10530 | 4.8667 | 0.48 |
| 0.0007 | 45.01 | 10764 | 4.5619 | 0.48 |
| 0.0009 | 46.01 | 10998 | 5.0250 | 0.44 |
| 0.0001 | 47.01 | 11232 | 4.4129 | 0.48 |
| 0.0001 | 48.01 | 11466 | 5.5987 | 0.44 |
| 0.0003 | 49.01 | 11700 | 5.4567 | 0.44 |
| 0.0468 | 50.01 | 11934 | 5.0218 | 0.48 |
| 0.187 | 51.01 | 12168 | 5.3269 | 0.4 |
| 0.0002 | 52.01 | 12402 | 5.4364 | 0.44 |
| 0.0001 | 53.01 | 12636 | 5.7307 | 0.44 |
| 0.0 | 54.01 | 12870 | 5.9781 | 0.44 |
| 0.0001 | 55.01 | 13104 | 4.8221 | 0.44 |
| 0.0001 | 56.01 | 13338 | 5.5808 | 0.4 |
| 0.0 | 57.01 | 13572 | 5.7662 | 0.44 |
| 0.0001 | 58.01 | 13806 | 5.4463 | 0.44 |
| 0.0021 | 59.01 | 14040 | 5.9576 | 0.44 |
| 0.5042 | 60.01 | 14274 | 5.9419 | 0.4 |
| 0.0053 | 61.01 | 14508 | 5.2977 | 0.48 |
| 0.0 | 62.01 | 14742 | 5.8541 | 0.4 |
| 0.1555 | 63.01 | 14976 | 6.5367 | 0.4 |
| 0.0081 | 64.01 | 15210 | 5.4808 | 0.4 |
| 0.0008 | 65.01 | 15444 | 5.8818 | 0.4 |
| 0.0 | 66.01 | 15678 | 6.4378 | 0.4 |
| 0.0 | 67.01 | 15912 | 5.6597 | 0.4 |
| 0.0 | 68.01 | 16146 | 5.8197 | 0.44 |
| 0.0061 | 69.01 | 16380 | 6.0141 | 0.4 |
| 0.0001 | 70.01 | 16614 | 6.2449 | 0.4 |
| 0.0001 | 71.01 | 16848 | 6.2530 | 0.4 |
| 0.0 | 72.01 | 17082 | 5.7655 | 0.4 |
| 0.0 | 73.01 | 17316 | 6.1521 | 0.4 |
| 0.0 | 74.01 | 17550 | 6.1597 | 0.44 |
| 0.6123 | 75.01 | 17784 | 6.4786 | 0.4 |
| 0.0 | 76.01 | 18018 | 6.5528 | 0.4 |
| 0.0 | 77.01 | 18252 | 5.5426 | 0.44 |
| 0.0 | 78.01 | 18486 | 6.4276 | 0.4 |
| 0.0 | 79.01 | 18720 | 6.8676 | 0.4 |
| 0.0 | 80.01 | 18954 | 6.6693 | 0.4 |
| 0.0 | 81.01 | 19188 | 6.7919 | 0.4 |
| 0.0 | 82.01 | 19422 | 6.7520 | 0.4 |
| 0.0 | 83.01 | 19656 | 6.7565 | 0.4 |
| 0.0 | 84.01 | 19890 | 6.8186 | 0.4 |
| 0.0 | 85.01 | 20124 | 6.5549 | 0.4 |
| 0.0 | 86.01 | 20358 | 6.7223 | 0.4 |
| 0.0 | 87.01 | 20592 | 6.9096 | 0.4 |
| 0.0 | 88.01 | 20826 | 6.9918 | 0.4 |
| 0.0 | 89.01 | 21060 | 7.2247 | 0.4 |
| 0.0001 | 90.01 | 21294 | 7.2267 | 0.4 |
| 0.0 | 91.01 | 21528 | 6.9826 | 0.4 |
| 0.0 | 92.01 | 21762 | 6.6385 | 0.4 |
| 0.792 | 93.01 | 21996 | 6.4020 | 0.4 |
| 0.0 | 94.01 | 22230 | 6.4453 | 0.4 |
| 0.0 | 95.01 | 22464 | 6.9102 | 0.4 |
| 0.0 | 96.01 | 22698 | 6.9262 | 0.4 |
| 0.0 | 97.01 | 22932 | 6.7757 | 0.4 |
| 0.0 | 98.01 | 23166 | 6.8298 | 0.4 |
| 0.0 | 99.01 | 23400 | 6.8317 | 0.4 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.0.1+cu117
- Datasets 3.0.1
- Tokenizers 0.20.0
|
cnfusion/DeepSeek-Prover-V2-7B-mlx-8Bit | cnfusion | 2025-05-03T14:19:41Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"base_model:deepseek-ai/DeepSeek-Prover-V2-7B",
"base_model:quantized:deepseek-ai/DeepSeek-Prover-V2-7B",
"8-bit",
"region:us"
] | null | 2025-05-03T14:19:14Z | ---
base_model: deepseek-ai/DeepSeek-Prover-V2-7B
tags:
- mlx
---
# cnfusion/DeepSeek-Prover-V2-7B-mlx-8Bit
The Model [cnfusion/DeepSeek-Prover-V2-7B-mlx-8Bit](https://huggingface.co/cnfusion/DeepSeek-Prover-V2-7B-mlx-8Bit) was converted to MLX format from [deepseek-ai/DeepSeek-Prover-V2-7B](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-7B) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("cnfusion/DeepSeek-Prover-V2-7B-mlx-8Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
OOOss/bg-ner-model | OOOss | 2025-05-03T14:18:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-05-03T14:15:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
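Pending details from the author, a generic token-classification sketch (the repo name and tags suggest an XLM-R model fine-tuned for Bulgarian NER; the label set is unverified):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="OOOss/bg-ner-model",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)
print(ner("София е столицата на България."))  # "Sofia is the capital of Bulgaria."
```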
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cnfusion/DeepSeek-Prover-V2-7B-mlx-fp16 | cnfusion | 2025-05-03T14:16:53Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"base_model:deepseek-ai/DeepSeek-Prover-V2-7B",
"base_model:finetune:deepseek-ai/DeepSeek-Prover-V2-7B",
"region:us"
] | null | 2025-05-03T14:16:07Z | ---
base_model: deepseek-ai/DeepSeek-Prover-V2-7B
tags:
- mlx
---
# cnfusion/DeepSeek-Prover-V2-7B-mlx-fp16
The Model [cnfusion/DeepSeek-Prover-V2-7B-mlx-fp16](https://huggingface.co/cnfusion/DeepSeek-Prover-V2-7B-mlx-fp16) was converted to MLX format from [deepseek-ai/DeepSeek-Prover-V2-7B](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-7B) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("cnfusion/DeepSeek-Prover-V2-7B-mlx-fp16")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mradermacher/medical-qa-vistral-GGUF | mradermacher | 2025-05-03T14:13:01Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"en",
"base_model:hungsvdut2k2/medical-qa-vistral",
"base_model:quantized:hungsvdut2k2/medical-qa-vistral",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T09:59:39Z | ---
base_model: hungsvdut2k2/medical-qa-vistral
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/hungsvdut2k2/medical-qa-vistral
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/medical-qa-vistral-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-GGUF/resolve/main/medical-qa-vistral.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-GGUF/resolve/main/medical-qa-vistral.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-GGUF/resolve/main/medical-qa-vistral.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-GGUF/resolve/main/medical-qa-vistral.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-GGUF/resolve/main/medical-qa-vistral.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-GGUF/resolve/main/medical-qa-vistral.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-GGUF/resolve/main/medical-qa-vistral.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-GGUF/resolve/main/medical-qa-vistral.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-GGUF/resolve/main/medical-qa-vistral.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-GGUF/resolve/main/medical-qa-vistral.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-GGUF/resolve/main/medical-qa-vistral.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/medical-qa-vistral-GGUF/resolve/main/medical-qa-vistral.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF | mradermacher | 2025-05-03T14:12:55Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"retrieval-augmented-generation",
"text-generation-inference",
"vi",
"base_model:AITeamVN/GRPO-VI-Qwen2-7B-RAG",
"base_model:quantized:AITeamVN/GRPO-VI-Qwen2-7B-RAG",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T17:54:13Z | ---
base_model: AITeamVN/GRPO-VI-Qwen2-7B-RAG
language:
- vi
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- retrieval-augmented-generation
- text-generation-inference
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AITeamVN/GRPO-VI-Qwen2-7B-RAG
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
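For scripted local use, llama-cpp-python can fetch and load a quant from this repo in one call (a sketch; any filename from the table below works, and `pip install llama-cpp-python huggingface-hub` is assumed):
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF",
    filename="GRPO-VI-Qwen2-7B-RAG.Q4_K_M.gguf",
    n_ctx=4096,
)
print(llm("Xin chào!", max_tokens=64)["choices"][0]["text"])
```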
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jahyungu/Llama-3.1-8B-Instruct_MetaMathQA-40K_cluster9 | jahyungu | 2025-05-03T14:11:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T09:45:45Z | ---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Llama-3.1-8B-Instruct_MetaMathQA-40K_cluster9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-Instruct_MetaMathQA-40K_cluster9
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
ma921/gpt2-large_c_dpo_imdb_noise40_epoch5 | ma921 | 2025-05-03T14:11:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:ma921/gpt2-large-sft-imdb",
"base_model:finetune:ma921/gpt2-large-sft-imdb",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:09:55Z | ---
library_name: transformers
license: mit
base_model: ma921/gpt2-large-sft-imdb
tags:
- generated_from_trainer
model-index:
- name: gpt2-large_c_dpo_imdb_noise40_epoch5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large_c_dpo_imdb_noise40_epoch5
This model is a fine-tuned version of [ma921/gpt2-large-sft-imdb](https://huggingface.co/ma921/gpt2-large-sft-imdb) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
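The training script is not documented; as a point of reference, a vanilla preference-optimization run with these hyperparameters could be set up with trl's `DPOTrainer` roughly as below. This is a sketch under stated assumptions: the toy dataset is hypothetical, and the "c_dpo" variant this checkpoint uses may differ from vanilla DPO.
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("ma921/gpt2-large-sft-imdb")
tokenizer = AutoTokenizer.from_pretrained("ma921/gpt2-large-sft-imdb")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token

# Hypothetical toy preference data; the real training set is not documented.
pref_dataset = Dataset.from_dict({
    "prompt": ["The movie was"],
    "chosen": [" wonderful and moving."],
    "rejected": [" terrible."],
})

args = DPOConfig(
    output_dir="gpt2-large_c_dpo_imdb",
    learning_rate=1e-6,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=32,
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed precision
)
# In older trl versions, pass tokenizer=... instead of processing_class=...
trainer = DPOTrainer(model=model, args=args, train_dataset=pref_dataset,
                     processing_class=tokenizer)
trainer.train()
```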
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
apriasmoro/82359c99-f769-46dd-842f-9b9c4b94ba6c | apriasmoro | 2025-05-03T14:08:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2025-05-03T14:04:58Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 82359c99-f769-46dd-842f-9b9c4b94ba6c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.5.2`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f8164dbb54597854_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f8164dbb54597854_train_data.json
type:
field_input: description
field_instruction: article
field_output: reference
field_system: None
format: None
no_input_format: None
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: apriasmoro/82359c99-f769-46dd-842f-9b9c4b94ba6c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/f8164dbb54597854_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 40b4e886-e6cd-4d53-9dbf-7bfd3907faf7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 40b4e886-e6cd-4d53-9dbf-7bfd3907faf7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 82359c99-f769-46dd-842f-9b9c4b94ba6c
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1906 | 0.0004 | 1 | 3.0166 |
| 3.1591 | 0.0012 | 3 | 2.9764 |
| 2.8282 | 0.0024 | 6 | 2.6155 |
| 2.3305 | 0.0036 | 9 | 2.1244 |
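Because this repo contains a LoRA adapter (`adapter: lora` in the config above) rather than full model weights, load it on top of the base model (a minimal sketch):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "apriasmoro/82359c99-f769-46dd-842f-9b9c4b94ba6c")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
```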
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
Triangle104/DareQwen-2.5-7B-Q5_K_M-GGUF | Triangle104 | 2025-05-03T14:08:06Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Locutusque/DareQwen-2.5-7B",
"base_model:quantized:Locutusque/DareQwen-2.5-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T14:07:40Z | ---
base_model: Locutusque/DareQwen-2.5-7B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/DareQwen-2.5-7B-Q5_K_M-GGUF
This model was converted to GGUF format from [`Locutusque/DareQwen-2.5-7B`](https://huggingface.co/Locutusque/DareQwen-2.5-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Locutusque/DareQwen-2.5-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/DareQwen-2.5-7B-Q5_K_M-GGUF --hf-file dareqwen-2.5-7b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/DareQwen-2.5-7B-Q5_K_M-GGUF --hf-file dareqwen-2.5-7b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/DareQwen-2.5-7B-Q5_K_M-GGUF --hf-file dareqwen-2.5-7b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/DareQwen-2.5-7B-Q5_K_M-GGUF --hf-file dareqwen-2.5-7b-q5_k_m.gguf -c 2048
```
|
ZeroAgency/zero-summary-v2-beta3-lora-e2 | ZeroAgency | 2025-05-03T14:07:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"generated_from_trainer",
"dataset:bethrezen/thinking-summary-v2",
"base_model:ZeroAgency/Zero-Mistral-24B",
"base_model:adapter:ZeroAgency/Zero-Mistral-24B",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-03T14:07:12Z | ---
library_name: peft
license: mit
base_model: ZeroAgency/Zero-Mistral-24B
tags:
- generated_from_trainer
datasets:
- bethrezen/thinking-summary-v2
model-index:
- name: outputs/zero-summary-v1-beta3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.0`
```yaml
# zero-summary-v2-beta3
adapter: lora
base_model: ZeroAgency/Zero-Mistral-24B
dataset_processes: 64
chat_template: jinja
chat_template_jinja: "{%- set today = strftime_now(\"%Y-%m-%d\") %}\n{%- set default_system_message = \"You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.\\nYour knowledge base was last updated on 2023-10-01. The current date is \" + today + \".\\n\\nWhen you're not sure about some information, you say that you don't have the information and don't make up anything.\\nIf the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \\\"What are some good restaurants around me?\\\" => \\\"Where are you?\\\" or \\\"When is the next flight to Tokyo\\\" => \\\"Where do you travel from?\\\")\" %}\n\n{{- bos_token }}\n\n{%- if messages[0]['role'] == 'system' %}\n {%- if messages[0]['content'] is string %}\n {%- set system_message = messages[0]['content'] %}\n {%- else %}\n {%- set system_message = messages[0]['content'][0]['text'] %}\n {%- endif %}\n {%- set loop_messages = messages[1:] %}\n{%- else %}\n {%- set system_message = default_system_message %}\n {%- set loop_messages = messages %}\n{%- endif %}\n{{- '[SYSTEM_PROMPT]' + system_message + '[/SYSTEM_PROMPT]' }}\n\n{%- for message in loop_messages %}\n {%- if message['role'] == 'user' %}\n {%- if message['content'] is string %}\n {{- '[INST]' + message['content'] + '[/INST]' }}\n {%- else %}\n {{- '[INST]' }}\n {%- for block in message['content'] %}\n {%- if block['type'] == 'text' %}\n {{- block['text'] }}\n {%- elif block['type'] in ['image', 'image_url'] %}\n {{- '[IMG]' }}\n {%- else %}\n {{- raise_exception('Only text and image blocks are supported in message content!') }}\n {%- endif %}\n {%- endfor %}\n {{- '[/INST]' }}\n {%- endif %}\n {%- elif message['role'] == 'system' %}\n {%- if message['content'] is string %}\n {{- '[SYSTEM_PROMPT]' + message['content'] + '[/SYSTEM_PROMPT]' }}\n {%- else %}\n {{- '[SYSTEM_PROMPT]' + message['content'][0]['text'] + '[/SYSTEM_PROMPT]' }}\n {%- endif %}\n {%- elif message['role'] == 'assistant' %}\n {%- if message['content'] is string %}\n {{- message['content'] + eos_token }}\n {%- else %}\n {{- message['content'][0]['text'] + eos_token }}\n {%- endif %}\n {%- else %}\n {{- raise_exception('Only user, system and assistant roles are supported!') }}\n {%- endif %}\n{%- endfor %}"
dataset_prepared_path: ./last_run_prepared
datasets:
- message_property_mappings:
content: content
role: role
path: bethrezen/thinking-summary-v2
trust_remote_code: false
field_messages: conversation
type: chat_template
# approx 20k samples should be enough
#val_set_size: 0.061
# exact duplicates are already cleaned
#dataset_exact_deduplication: true
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
#learning_rate: 0.0001
learning_rate: 1e-5
lisa_layers_attribute: model.layers
#is_mistral_derived_model: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
load_best_model_at_end: true
load_in_4bit: true
load_in_8bit: false
lora_alpha: 96
lora_dropout: 0.1
lora_target_linear: true
lora_r: 96
lr_scheduler: cosine
#max_prompt_len: 8192
mean_resizing_embeddings: false
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_torch_fused
output_dir: ./outputs/zero-summary-v1-beta3
sample_packing_bin_size: 200
sample_packing_group_size: 100000
save_only_model: false
save_safetensors: true
sequence_len: 110000
min_sample_len: 1
#shuffle_merged_datasets: true
skip_prepare_dataset: false
strict: false
train_on_inputs: false
weight_decay: 0.01
wandb_project: zero-summary
wandb_name: zero-summary-v1-beta3
bf16: true
fp16: false
tf32: false
flash_attention: true
save_strategy: epoch
eval_strategry: epoch
logging_steps: 1
save_total_limit: 5
warmup_steps: 0
sample_packing: true
pad_to_sequence_len: true
group_by_length: true
seed: 42
data_seed: 42
deepspeed: zero1.json
log_with: wandb
trust_remote_code: true
use_fast_tokenizer: true
special_tokens:
pad_token: "<pad>"
```
</details><br>
# outputs/zero-summary-v1-beta3
This model is a fine-tuned version of [ZeroAgency/Zero-Mistral-24B](https://huggingface.co/ZeroAgency/Zero-Mistral-24B) on the bethrezen/thinking-summary-v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
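In the absence of official usage instructions, a minimal loading sketch is shown below; it assumes this repository contains a standard PEFT LoRA adapter for `ZeroAgency/Zero-Mistral-24B`, loaded in 4-bit as in the training config above (the example prompt is illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "ZeroAgency/Zero-Mistral-24B"
adapter_id = "ZeroAgency/zero-summary-v2-beta3-lora-e2"

# The adapter was trained on a 4-bit base (load_in_4bit: true above),
# so load the base model the same way.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "Summarize the following text: ..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```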
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: AdamW (fused, torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 2.0
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
Rahmaa33/MotherDuckTEXT2SQLL | Rahmaa33 | 2025-05-03T14:04:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:motherduckdb/DuckDB-NSQL-7B-v0.1",
"base_model:adapter:motherduckdb/DuckDB-NSQL-7B-v0.1",
"region:us"
] | null | 2025-05-03T14:04:45Z | ---
base_model: motherduckdb/DuckDB-NSQL-7B-v0.1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
A minimal loading sketch, assuming this repository holds a standard PEFT LoRA adapter for the [motherduckdb/DuckDB-NSQL-7B-v0.1](https://huggingface.co/motherduckdb/DuckDB-NSQL-7B-v0.1) base model (the prompt below is illustrative):
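```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "motherduckdb/DuckDB-NSQL-7B-v0.1"
adapter_id = "Rahmaa33/MotherDuckTEXT2SQLL"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "-- Count the number of rows in the 'orders' table\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```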
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
apriasmoro/29761ecf-4baf-4215-b6f1-d66c8daf40a6 | apriasmoro | 2025-05-03T14:03:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2025-05-03T14:00:10Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 29761ecf-4baf-4215-b6f1-d66c8daf40a6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.5.2`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f8164dbb54597854_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f8164dbb54597854_train_data.json
type:
field_input: description
field_instruction: article
field_output: reference
field_system: None
format: None
no_input_format: None
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: apriasmoro/29761ecf-4baf-4215-b6f1-d66c8daf40a6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/f8164dbb54597854_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 40b4e886-e6cd-4d53-9dbf-7bfd3907faf7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 40b4e886-e6cd-4d53-9dbf-7bfd3907faf7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 29761ecf-4baf-4215-b6f1-d66c8daf40a6
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the dataset configured above (`f8164dbb54597854_train_data.json`).
It achieves the following results on the evaluation set:
- Loss: 2.1081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1906 | 0.0004 | 1 | 3.0166 |
| 3.1564 | 0.0012 | 3 | 2.9789 |
| 2.8283 | 0.0024 | 6 | 2.6236 |
| 2.3188 | 0.0036 | 9 | 2.1081 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
cernigyerkelty7v/dfgbfgb | cernigyerkelty7v | 2025-05-03T14:03:12Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
] | null | 2025-05-03T14:03:12Z | ---
license: bsd-3-clause
---
|
eaddario/OLMo-2-1124-7B-Instruct-GGUF | eaddario | 2025-05-03T13:59:07Z | 0 | 0 | null | [
"gguf",
"quant",
"experimental",
"text-generation",
"en",
"dataset:eaddario/imatrix-calibration",
"arxiv:2501.00656",
"arxiv:2411.15124",
"arxiv:2406.17415",
"base_model:allenai/OLMo-2-1124-7B-Instruct",
"base_model:quantized:allenai/OLMo-2-1124-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-03T11:07:39Z | ---
base_model:
- allenai/OLMo-2-1124-7B-Instruct
datasets:
- eaddario/imatrix-calibration
language:
- en
license:
- apache-2.0
pipeline_tag: text-generation
tags:
- gguf
- quant
- experimental
---
# Experimental layer-wise quantization of allenai/OLMo-2-1124-7B-Instruct
Using [LLaMA C++][llm] release [b5220][llm-rel] for quantization.
Original model: [allenai/OLMo-2-1124-7B-Instruct][mdl]
From the original model creators:
> OLMo 2 7B Instruct November 2024 is post-trained variant of the [OLMo-2 7B November 2024](https://huggingface.co/allenai/OLMo2-7B-1124) model, which has undergone supervised finetuning on an OLMo-specific variant of the [Tülu 3 dataset](https://huggingface.co/datasets/allenai/tulu-3-sft-olmo-2-mixture) and further DPO training on [this dataset](https://huggingface.co/datasets/allenai/olmo-2-1124-7b-preference-mix), and finally RLVR training using [this data](https://huggingface.co/datasets/allenai/RLVR-GSM).
> Tülu 3 is designed for state-of-the-art performance on a diversity of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
> Check out the [OLMo 2 paper](https://arxiv.org/abs/2501.00656) or [Tülu 3 paper](https://arxiv.org/abs/2411.15124) for more details!
>
> OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
> These models are trained on the Dolma dataset. We are releasing all code, checkpoints, logs (coming soon), and associated training details.
> The core models released in this batch include the following:
>
> | **Stage** | **OLMo 2 7B** | **OLMo 2 13B** |
> |-------------------------|-------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------|
> | **Base Model** | [allenai/OLMo2-7B-1124](https://huggingface.co/allenai/OLMo2-7B-1124) | [allenai/OLMo-2-13B-1124](https://huggingface.co/allenai/OLMo-2-13B-1124) |
> | **SFT** | [allenai/OLMo-2-1124-7B-SFT](https://huggingface.co/allenai/OLMo-2-1124-7B-SFT) | [allenai/OLMo-2-1124-13B-SFT](https://huggingface.co/allenai/OLMo-2-1124-13B-SFT) |
> | **DPO** | [allenai/OLMo-2-1124-7B-DPO](https://huggingface.co/allenai/OLMo-2-1124-7B-DPO) | [allenai/OLMo-2-1124-13B-DPO](https://huggingface.co/allenai/OLMo-2-1124-13B-DPO) |
> | **Final Models (RLVR)** | [allenai/OLMo-2-1124-7B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct) | [allenai/OLMo-2-1124-13B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-13B-Instruct) |
> | **Reward Model (RM)** | [allenai/OLMo-2-1124-7B-RM](https://huggingface.co/allenai/OLMo-2-1124-7B-RM) | [allenai/OLMo-2-1124-13B-RM](https://huggingface.co/allenai/OLMo-2-1124-13B-RM) |
# PLEASE READ THIS BEFORE USING THESE EXPERIMENTAL VERSIONS!
An area of personal interest is finding ways to optimize the inference performance of LLMs when deployed in resource-constrained environments like commodity hardware, desktops, laptops, mobiles, edge devices, etc. There are many approaches to accomplish this, including architecture simplification and knowledge distillation, but my focus has been primarily on quantization and pruning.
The method used to produce these experimental versions is covered in [Squeezing Tensor Bits: the quest for smaller LLMs][mdm], but at a high level it involves using a custom version of `llama-imatrix` and `llama-quantize` to identify influential tensors, and quantize the most important layers to higher bit precision and the less important to lower bits. This process was partly inspired by Dumitru et al.'s [Layer-Wise Quantization: A Pragmatic and Effective Method for Quantizing LLMs Beyond Integer Bit-Levels][lwq-ppr].
As of version [b5125][qtz-rel] [llama-quantize][qtz] can now perform **tensor-wide quantization (TWQ)**, whereby user-defined tensors are quantized at a specific level, or perform **layer-wise quantization (LWQ)** by selecting different quantization types per tensor/layer. For example, `--tensor-type attn_v=q6_k` will quantize all *Attention Value* tensors at *q6_k* (TWQ), and `--tensor-type "\.([0-9]|1[01257]|31)\.attn_k=q4_k"` will quantize *Attention Key* tensors on layers 0 to 9, 10, 11, 12, 15, 17 and 31 at *q4_k*, leaving the remaining layers at their default value (LWQ).
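As a quick illustration of how the layer-selection pattern above behaves, the regex can be tested against GGUF tensor names directly (a standalone sketch, independent of `llama-quantize` itself):
```python
import re

# The LWQ example from above: quantize attn_k on layers 0-9, 10, 11, 12,
# 15, 17 and 31 at q4_k, leaving the remaining layers at their default.
pattern = re.compile(r"\.([0-9]|1[01257]|31)\.attn_k")

tensor_names = [f"blk.{i}.attn_k" for i in range(32)]
selected = [name for name in tensor_names if pattern.search(name)]
print(selected)
# -> blk.0.attn_k ... blk.12.attn_k, blk.15.attn_k, blk.17.attn_k, blk.31.attn_k
```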
The modified version of [llama-imatrix][imx] generates useful statistics to guide the tensor selection process; `--show-statistics` will display:
- **Σ(Bias):** the sum of all activations over the tensor (i.e. the Importance Scores)
- **Min & Max:** minimum and maximum activation values
- **μ & σ:** activations' mean and standard deviation
- **% Active:** proportion of elements whose average activation exceeds a very small threshold (1e-6). Helpful to determine how alive/dormant the tensor is during inference
- **N:** number of activations in the tensor
- **Entropy:** entropy of the activation distribution, in bits (standard Shannon entropy measurement)
- **E (norm):** Normalized entropy.
- **ZD Score:** z-score distribution as described in 3.1 Layer Importance Scores in the Layer-Wise Quantization paper
- **CosSim:** cosine similarity between same type tensors with respect to the previous layer (i.e. blk.7.attn_k and blk.6.attn_k)
Please note that statistics are calculated for each individual tensor and should be used to compare between tensors of the same type only. For example, assuming that *attn_k* in layer 10 has a higher influence during inference than *attn_k* in layer 7 because its **Σ(Bias)** is larger makes sense, whilst concluding the same between *attn_k* and *ffn_down* does not.
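To make a few of these statistics concrete, here is an illustrative reimplementation over a vector of mean activation magnitudes (a sketch for intuition only, not the actual `llama-imatrix` code):
```python
import numpy as np

def tensor_stats(activations: np.ndarray) -> dict:
    """Illustrative per-tensor statistics from mean activation magnitudes."""
    a = np.abs(activations.astype(np.float64))
    total = a.sum()                            # Σ(Bias): total importance score
    p = a[a > 0] / total                       # activations as a distribution
    entropy = float(-(p * np.log2(p)).sum())   # Shannon entropy, in bits
    e_norm = entropy / np.log2(a.size)         # normalized entropy in [0, 1]
    pct_active = float((a > 1e-6).mean())      # % Active above a tiny threshold
    return {"sum_bias": float(total), "entropy": entropy,
            "entropy_norm": e_norm, "pct_active": pct_active}

def cos_sim(curr: np.ndarray, prev: np.ndarray) -> float:
    """CosSim between same-type tensors in adjacent layers (blk.7 vs blk.6)."""
    return float(np.dot(curr, prev) /
                 (np.linalg.norm(curr) * np.linalg.norm(prev)))
```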
There’s a [pull request][imtx-pr] to merge these changes back into the core llama.cpp project. This may or may not ever happen, so until then, the modified version will be available on [GitHub][gh].
For testing and comparison I use models produced by [Unsloth][ust] ([Daniel and Michael Han][ust-ai] do some really advanced level stuff!) and [Bartowski][btk] (see credits below) but if they don't provide versions of the required model, all tests and comparisons are done against naive quantizations obtained by simply running `llama-quantize` with no further optimization.
All experimental versions were generated using an appropriate imatrix created from calibration datasets available at [eaddario/imatrix-calibration][ical]. At its core, an Importance Matrix (imatrix) is a table or, more broadly, a structured representation that scores the relative importance of different features or parameters in a machine learning model. It essentially quantifies the "impact" each feature has on a specific outcome, prediction, or relationship being modeled, and it helps to counterbalance the negative effects of quantization and pruning.
The process to generate these models is roughly as follows:
1. Convert the original model's tensors to [GGUF][ggf] F16*
2. Estimate the Perplexity score for the F16 model (baseline) using the [wikitext-2-raw-v1][wki-dat] dataset, and save the [logits][lgt]
3. Generate an [imatrix][imx-dat] from selected calibration datasets
4. Determine tensor and layer Importance Score contribution using the modified version of `llama-imatrix`
5. Select an appropriate quant level for each tensor and quantize the model using `llama-quantize`
6. Calculate Perplexity, KL Divergence, ARC (Easy+Challenge), HellaSwag, MMLU, Truthful QA and WinoGrande scores for each quantized model
7. Keep versions with the best scores
8. Repeat until all desired quants are created. I find that quantizations below Q3/IQ3 are not fit for my purposes and therefore do not usually generate them, but I'm happy to provide other quants on request.
*[BF16][bf16] would be preferred, but Apple's GPUs don't support it yet, and therefore any operations are executed on the CPU, making it unacceptably slow. This is expected to change in the near term but until then, if you are using Apple kit, avoid using any models tagged BF16
# Models
### Sizes (in GB)
| Model | Bartowski | Repo | Shrinkage |
| ----------------------------------------------------------------------- | --------: | ---: | --------: |
| [OLMo-2-1124-7B-Instruct-IQ3_M](./OLMo-2-1124-7B-Instruct-IQ3_M.gguf) | 3.78 | 3.69 | 2.4% |
| [OLMo-2-1124-7B-Instruct-IQ3_S](./OLMo-2-1124-7B-Instruct-IQ3_S.gguf) | 3.68 | 3.43 | 6.8% |
| [OLMo-2-1124-7B-Instruct-IQ4_NL](./OLMo-2-1124-7B-Instruct-IQ4_NL.gguf) | 4.71 | 4.39 | 6.2% |
| [OLMo-2-1124-7B-Instruct-Q3_K_L](./OLMo-2-1124-7B-Instruct-Q3_K_L.gguf) | 4.32 | 3.76 | 13.0% |
| [OLMo-2-1124-7B-Instruct-Q3_K_M](./OLMo-2-1124-7B-Instruct-Q3_K_M.gguf) | 4.02 | 3.56 | 11.4% |
| [OLMo-2-1124-7B-Instruct-Q3_K_S](./OLMo-2-1124-7B-Instruct-Q3_K_S.gguf) | 3.66 | 3.31 | 9.6% |
| [OLMo-2-1124-7B-Instruct-Q4_K_M](./OLMo-2-1124-7B-Instruct-Q4_K_M.gguf) | 4.92 | 4.41 | 10.4% |
| [OLMo-2-1124-7B-Instruct-Q4_K_S](./OLMo-2-1124-7B-Instruct-Q4_K_S.gguf) | 4.69 | 4.28 | 8.7% |
| [OLMo-2-1124-7B-Instruct-Q5_K_M](./OLMo-2-1124-7B-Instruct-Q5_K_M.gguf) | 5.73 | 5.38 | 6.1% |
| [OLMo-2-1124-7B-Instruct-Q5_K_S](./OLMo-2-1124-7B-Instruct-Q5_K_S.gguf) | 5.60 | 5.24 | 6.4% |
| [OLMo-2-1124-7B-Instruct-Q6_K](./OLMo-2-1124-7B-Instruct-Q6_K.gguf) | 6.60 | 6.57 | 0.5% |
| [OLMo-2-1124-7B-Instruct-Q8_0](./OLMo-2-1124-7B-Instruct-Q8_0.gguf) | 8.54 | 7.73 | 9.5% |
### Perplexity and KL Divergence scores
| Model | μPPL | 𝜌PPL | μKLD | RMS Δp |
| ----------------------------------------------------------------------- | -----------------: | -----: | -----------------: | ------------: |
| [OLMo-2-1124-7B-Instruct-IQ3_M](./OLMo-2-1124-7B-Instruct-IQ3_M.gguf) | 9.201710 ±0.071255 | 96.92% | 0.153149 ±0.000797 | 11.390 ±0.060 |
| [OLMo-2-1124-7B-Instruct-IQ3_S](./OLMo-2-1124-7B-Instruct-IQ3_S.gguf) | 9.306264 ±0.071084 | 95.94% | 0.197699 ±0.000965 | 12.938 ±0.062 |
| [OLMo-2-1124-7B-Instruct-IQ4_NL](./OLMo-2-1124-7B-Instruct-IQ4_NL.gguf) | 8.680650 ±0.065689 | 98.37% | 0.076583 ±0.000454 | 8.111 ±0.049 |
| [OLMo-2-1124-7B-Instruct-Q3_K_L](./OLMo-2-1124-7B-Instruct-Q3_K_L.gguf) | 9.252820 ±0.070404 | 95.74% | 0.204708 ±0.001020 | 13.139 ±0.063 |
| [OLMo-2-1124-7B-Instruct-Q3_K_M](./OLMo-2-1124-7B-Instruct-Q3_K_M.gguf) | 9.242884 ±0.069850 | 95.38% | 0.220640 ±0.001086 | 13.659 ±0.065 |
| [OLMo-2-1124-7B-Instruct-Q3_K_S](./OLMo-2-1124-7B-Instruct-Q3_K_S.gguf) | 9.651383 ±0.073494 | 94.01% | 0.287772 ±0.001362 | 15.534 ±0.069 |
| [OLMo-2-1124-7B-Instruct-Q4_K_M](./OLMo-2-1124-7B-Instruct-Q4_K_M.gguf) | 8.683512 ±0.065748 | 98.46% | 0.071862 ±0.000424 | 7.858 ±0.048 |
| [OLMo-2-1124-7B-Instruct-Q4_K_M-bartowski][b-q4km] | 7.951677 ±0.058104 | 97.11% | 0.144072 ±0.001123 | 10.504 ±0.065 |
| [OLMo-2-1124-7B-Instruct-Q4_K_S](./OLMo-2-1124-7B-Instruct-Q4_K_S.gguf) | 8.665009 ±0.065466 | 98.38% | 0.076159 ±0.000445 | 8.104 ±0.049 |
| [OLMo-2-1124-7B-Instruct-Q5_K_M](./OLMo-2-1124-7B-Instruct-Q5_K_M.gguf) | 8.475671 ±0.064030 | 99.41% | 0.025628 ±0.000174 | 4.820 ±0.037 |
| [OLMo-2-1124-7B-Instruct-Q5_K_S](./OLMo-2-1124-7B-Instruct-Q5_K_S.gguf) | 8.494382 ±0.064237 | 99.39% | 0.026960 ±0.000180 | 4.932 ±0.038 |
| [OLMo-2-1124-7B-Instruct-Q6_K](./OLMo-2-1124-7B-Instruct-Q6_K.gguf) | 8.425234 ±0.063616 | 99.67% | 0.013181 ±0.000105 | 3.474 ±0.032 |
| [OLMo-2-1124-7B-Instruct-Q8_0](./OLMo-2-1124-7B-Instruct-Q8_0.gguf) | 8.416597 ±0.063592 | 99.74% | 0.009659 ±0.000089 | 2.993 ±0.031 |
| [OLMo-2-1124-7B-Instruct-F16](./OLMo-2-1124-7B-Instruct-F16.gguf) | 8.368713 ±0.062985 | 100% | N/A | N/A |
### ARC, HellaSwag, MMLU, Truthful QA and WinoGrande scores
Scores generated using [llama-perplexity][ppl] with 750 tasks per test, and a context size of 768 tokens.
For the test data used in the generation of these scores, follow the appropriate links: [HellaSwag][hsw-tst], [ARC, MMLU, Truthful QA][tst-dat] and [WinoGrande][wng-tst]
| Model | ARC | HellaSwag | MMLU | Truthful QA | WinoGrande | Avg Score |
| ----------------------------------------------------------------------- | --------------: | --------: | --------------: | --------------: | --------------: | --------: |
| [OLMo-2-1124-7B-Instruct-IQ3_M](./OLMo-2-1124-7B-Instruct-IQ3_M.gguf) | 64.9333 ±1.7436 | 82.53 | 40.9333 ±1.7967 | 34.8000 ±1.7405 | 72.5333 ±1.6309 | 59.15 |
| [OLMo-2-1124-7B-Instruct-IQ3_S](./OLMo-2-1124-7B-Instruct-IQ3_S.gguf) | 65.2000 ±1.7405 | 82.66 | 41.3333 ±1.7993 | 33.6000 ±1.7259 | 71.7333 ±1.6453 | 58.91 |
| [OLMo-2-1124-7B-Instruct-IQ4_NL](./OLMo-2-1124-7B-Instruct-IQ4_NL.gguf) | 67.0667 ±1.7172 | 83.33 | 41.7333 ±1.8018 | 37.3333 ±1.7674 | 74.4000 ±1.5947 | 60.77 |
| [OLMo-2-1124-7B-Instruct-Q3_K_L](./OLMo-2-1124-7B-Instruct-Q3_K_L.gguf) | 64.5333 ±1.7481 | 81.47 | 40.9333 ±1.7967 | 33.3333 ±1.7225 | 72.5333 ±1.6309 | 58.56 |
| [OLMo-2-1124-7B-Instruct-Q3_K_M](./OLMo-2-1124-7B-Instruct-Q3_K_M.gguf) | 63.4667 ±1.7594 | 81.86 | 41.3333 ±1.7993 | 33.6000 ±1.7259 | 73.2000 ±1.6184 | 58.69 |
| [OLMo-2-1124-7B-Instruct-Q3_K_S](./OLMo-2-1124-7B-Instruct-Q3_K_S.gguf) | 64.9333 ±1.7436 | 81.60 | 40.4000 ±1.7930 | 33.2000 ±1.7207 | 71.8667 ±1.6430 | 58.40 |
| [OLMo-2-1124-7B-Instruct-Q4_K_M](./OLMo-2-1124-7B-Instruct-Q4_K_M.gguf) | 66.5333 ±1.7242 | 83.87 | 42.0000 ±1.8034 | 36.9333 ±1.7635 | 71.4667 ±1.6500 | 60.16 |
| [OLMo-2-1124-7B-Instruct-Q4_K_M-bartowski][b-q4km] | 65.8667 ±1.7325 | 82.40 | 42.1333 ±1.8042 | 34.0000 ±1.7309 | 74.2667 ±1.5974 | 59.73 |
| [OLMo-2-1124-7B-Instruct-Q4_K_S](./OLMo-2-1124-7B-Instruct-Q4_K_S.gguf) | 66.2667 ±1.7276 | 83.87 | 42.5333 ±1.8065 | 36.6667 ±1.7608 | 71.3333 ±1.6523 | 60.13 |
| [OLMo-2-1124-7B-Instruct-Q5_K_M](./OLMo-2-1124-7B-Instruct-Q5_K_M.gguf) | 67.4667 ±1.7119 | 83.33 | 42.0000 ±1.8034 | 37.6000 ±1.7699 | 74.4000 ±1.5947 | 60.96 |
| [OLMo-2-1124-7B-Instruct-Q5_K_S](./OLMo-2-1124-7B-Instruct-Q5_K_S.gguf) | 67.3333 ±1.7137 | 83.47 | 42.0000 ±1.8034 | 37.2000 ±1.7661 | 74.8000 ±1.5864 | 60.96 |
| [OLMo-2-1124-7B-Instruct-Q6_K](./OLMo-2-1124-7B-Instruct-Q6_K.gguf) | 67.0667 ±1.7172 | 83.33 | 42.2667 ±1.8050 | 37.4667 ±1.7686 | 74.4000 ±1.5947 | 60.91 |
| [OLMo-2-1124-7B-Instruct-Q8_0](./OLMo-2-1124-7B-Instruct-Q8_0.gguf) | 66.6667 ±1.7225 | 83.20 | 42.4000 ±1.8057 | 37.7333 ±1.7711 | 73.8667 ±1.6054 | 60.77 |
| [OLMo-2-1124-7B-Instruct-F16](./OLMo-2-1124-7B-Instruct-F16.gguf) | 67.3333 ±1.7137 | 83.20 | 41.8667 ±1.8026 | 37.8667 ±1.7724 | 72.6667 ±1.6284 | 60.59 |
### Tokens per Second - Benchmarks
Scores generated using [llama-bench][bch]. Naive (`llama-quantize` with no optimization) Q4_K_M quantization included for comparison.
| model | size | params | backend | threads | test | t/s |
| ----------------------------------------------------------------------- | -------: | -----: | ---------- | ------: | ------------: | ------------: |
| [OLMo-2-1124-7B-Instruct-Q4_K_M](./OLMo-2-1124-7B-Instruct-Q4_K_M.gguf) | 3.73 GiB | 7.30 B | Metal,BLAS | 6 | pp512 | 331.23 ± 0.55 |
| [OLMo-2-1124-7B-Instruct-Q4_K_M](./OLMo-2-1124-7B-Instruct-Q4_K_M.gguf) | 3.73 GiB | 7.30 B | Metal,BLAS | 6 | tg128 | 29.25 ± 0.19 |
| [OLMo-2-1124-7B-Instruct-Q4_K_M](./OLMo-2-1124-7B-Instruct-Q4_K_M.gguf) | 3.73 GiB | 7.30 B | Metal,BLAS | 6 | pp1024+tg1024 | 44.26 ± 0.13 |
| [OLMo-2-1124-7B-Instruct-Q4_K_M-bartowski][b-q4km] | 4.16 GiB | 7.30 B | Metal,BLAS | 6 | pp512 | 345.11 ± 0.95 |
| [OLMo-2-1124-7B-Instruct-Q4_K_M-bartowski][b-q4km] | 4.16 GiB | 7.30 B | Metal,BLAS | 6 | tg128 | 27.54 ± 0.15 |
| [OLMo-2-1124-7B-Instruct-Q4_K_M-bartowski][b-q4km] | 4.16 GiB | 7.30 B | Metal,BLAS | 6 | pp1024+tg1024 | 42.76 ± 0.18 |
# Metrics used
**[Perplexity][ppx]:** one of the key metrics used in NLP evaluation. It measures the quality of a language model by evaluating how well it predicts the next token given a particular sequence of words. A PPL of **1** indicates an exact match between predicted and actual tokens, whereas values greater than one indicate the degree of "surprise" with which the generated token differs from the expected one.
**[Kullback–Leibler (KL) Divergence][kld]:** a statistical measure of how much one probability distribution differs from another. When quantizing models (or altering the original tensors in any way, for that matter), the closer we can keep the weights' probability distribution to the original model's, the better, so values closer to **0** are preferable.
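For intuition, both metrics can be computed directly from model outputs; a schematic sketch, assuming per-token log-probabilities and logits collected from the baseline and quantized models over the same evaluation text:
```python
import numpy as np

def perplexity(token_logprobs: np.ndarray) -> float:
    """PPL = exp(mean negative log-likelihood) over the evaluated tokens."""
    return float(np.exp(-token_logprobs.mean()))

def mean_kl_divergence(p_logits: np.ndarray, q_logits: np.ndarray) -> float:
    """Mean per-token KL(P || Q); P = baseline model, Q = quantized model."""
    def log_softmax(x):
        x = x - x.max(axis=-1, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))
    log_p, log_q = log_softmax(p_logits), log_softmax(q_logits)
    kl = (np.exp(log_p) * (log_p - log_q)).sum(axis=-1)  # KLD per token
    return float(kl.mean())
```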
**[AI2 Reasoning Challenge (ARC)][arc]:** a benchmark to evaluate the ability of AI models to answer complex science questions that require logical reasoning beyond pattern matching.
**[HellaSwag][hsw]:** the Harder Endings, Longer contexts, and Low-shot Activities for Situations With Adversarial Generations (bit of a mouthful!) is a benchmark designed to test commonsense natural language inference. It requires the model to predict the most likely ending of a sentence.
**[MMLU][mmlu]:** the Massive Multitask Language Understanding evaluates LLMs’ general knowledge and problem-solving abilities across 57 subjects, including elementary mathematics, US history, computer science, and law.
**[Truthful QA][tqa]:** evaluates how well LLMs generate truthful responses to questions. It identifies whether AI models can avoid generating false or misleading information, particularly in areas where human knowledge is prone to misconceptions.
**[Winogrande][wng]:** based on the [Winograd Schema Challenge][wng-chl], is a natural language understanding task requiring models to resolve ambiguities in sentences involving pronoun references.
## Credits
A big **Thank You!** to [Colin Kealty][btk] for the many contributions and for being one of the best sources of high quality quantized models available on Huggingface, and a really big ***Thank You!*** to [Georgi Gerganov][ggg] for his amazing work with **llama.cpp** and the **ggml/gguf** libraries.
[arc]: https://leaderboard.allenai.org/arc/submissions/get-started
[btk]: https://huggingface.co/bartowski
[bch]: https://github.com/ggml-org/llama.cpp/tree/master/tools/llama-bench
[bf16]: https://en.wikipedia.org/wiki/Bfloat16_floating-point_format
[b-q4km]: https://huggingface.co/bartowski/OLMo-2-1124-7B-Instruct-GGUF/blob/main/OLMo-2-1124-7B-Instruct-Q4_K_M.gguf
[u-q4km]: https://huggingface.co/unsloth
[ical]: https://huggingface.co/datasets/eaddario/imatrix-calibration
[ggg]: https://github.com/ggerganov
[ggf]: https://huggingface.co/docs/hub/en/gguf
[gh]: https://github.com/EAddario/llama.cpp/tree/imatrix
[hsw]: https://rowanzellers.com/hellaswag
[hsw-tst]: https://github.com/klosax/hellaswag_text_data
[imx-dat]: https://huggingface.co/eaddario/OLMo-2-1124-7B-Instruct-GGUF/tree/main/imatrix
[imx]: https://github.com/ggml-org/llama.cpp/tree/master/tools/imatrix
[imtx-pr]: https://github.com/ggml-org/llama.cpp/pull/12718
[kld]: https://en.wikipedia.org/wiki/Kullback–Leibler_divergence
[llm]: https://github.com/ggerganov/llama.cpp
[llm-rel]: https://github.com/ggerganov/llama.cpp/releases/tag/b5220
[lgt]: https://huggingface.co/eaddario/OLMo-2-1124-7B-Instruct-GGUF/tree/main/logits
[lwq-ppr]: https://arxiv.org/abs/2406.17415
[mdm]: https://medium.com/@eaddario/squeezing-tensor-bits-the-quest-for-smaller-llms-86b23bd052ca
[mmlu]: https://github.com/hendrycks/test
[mdl]: https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct
[ppl]: https://github.com/ggml-org/llama.cpp/tree/master/tools/perplexity
[ppx]: https://huggingface.co/docs/transformers/en/perplexity
[qtz]: https://github.com/ggml-org/llama.cpp/tree/master/tools/quantize
[qtz-rel]: https://github.com/ggerganov/llama.cpp/releases/tag/b5125
[tst-dat]: https://huggingface.co/datasets/ikawrakow/validation-datasets-for-llama.cpp/tree/main
[tqa]: https://github.com/sylinrl/TruthfulQA
[ust]: https://huggingface.co/unsloth
[ust-ai]: https://unsloth.ai
[wng-chl]: https://cdn.aaai.org/ocs/4492/4492-21843-1-PB.pdf
[wki-dat]: https://huggingface.co/datasets/Salesforce/wikitext/tree/main/wikitext-2-raw-v1
[wng]: https://winogrande.allenai.org
[wng-tst]: https://huggingface.co/datasets/ikawrakow/winogrande-eval-for-llama.cpp/tree/main
|
arielb30/videomae-base-finetuned-hmdb51_dataset | arielb30 | 2025-05-03T13:54:12Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-05-03T13:08:43Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-hmdb51_dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-hmdb51_dataset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3945
- Accuracy: 0.8798
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 380
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.8738 | 0.1026 | 39 | 1.6265 | 0.4299 |
| 1.1232 | 1.1026 | 78 | 0.9534 | 0.7430 |
| 0.86 | 2.1026 | 117 | 0.7309 | 0.7897 |
| 0.5001 | 3.1026 | 156 | 0.5999 | 0.8131 |
| 0.4402 | 4.1026 | 195 | 0.5475 | 0.8178 |
| 0.3291 | 5.1026 | 234 | 0.5903 | 0.8505 |
| 0.2643 | 6.1026 | 273 | 0.5727 | 0.8271 |
| 0.1928 | 7.1026 | 312 | 0.4194 | 0.8972 |
| 0.1749 | 8.1026 | 351 | 0.4278 | 0.8832 |
| 0.1588 | 9.0763 | 380 | 0.4737 | 0.8832 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
bean980310/aino-koito-xl-animagine-xl-4-v1 | bean980310 | 2025-05-03T13:51:48Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:cagliostrolab/animagine-xl-4.0",
"base_model:adapter:cagliostrolab/animagine-xl-4.0",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-04-22T12:51:13Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
1girl, Aino Koito, sensitive, year 2025, outdoors, stage, stage lights,
solo, tomboy, very short hair, blonde hair, very handsome face, aqua eyes,
large breasts, a adult very handsone tomboy girl with very boyish handsome
shortcut blonde hair with pixie cut and very handsome face and aqua eyes and
perfect female body and large breasts wearing yellow idol uniform and idol
clothes and pleated miniskirt and gloves and thighhighs and high heels and
standing and singing with grab microphone, masterpiece, high score, great
score, absurdres
parameters:
negative_prompt: >-
old, early, mid, grass, leaf, girlish face, girly hair, sidelocks,
muscular body, mutation, extra limbs, extra legs, extra arms, extra hands,
missing limbs, missing legs, missing arms, missing hands, headwear, hat,
cap, beret, hairpin, hairband, hair ribbon, hair ornament, hairclip,
lowres, bad anatomy, bad hands, text, error, missing finger, extra digits,
fewer digits, cropped, worst quality, low quality, low score, bad score,
average score, signature, watermark, username, blurry
output:
url: images/ComfyUI_upscale_00024_.png
- text: >-
1girl, Aino Koito, sensitive, year 2025, outdoors, town, solo, tomboy, very
short hair, blonde hair, very handsome face, aqua eyes, large breasts, a
adult very handsone tomboy girl with very boyish handsome shortcut blonde
hair with pixie cut and very handsome face and aqua eyes and perfect female
body and large breasts wearing white military uniform with epaulettes and
aiguillette and military blue skirt and military white legwear and military
white footwear and military white jacket with white long sleeves and white
buttoned shirt and white lace trim bridal gloves and white cape with
shoulder cape and brown belt with gold belt buckle and pleated blue
miniskirt with white stripes and white thighhighs with floral lace trim and
white high heels and standing with legs together and hand on hip and serious
face, masterpiece, high score, great score, absurdres
parameters:
negative_prompt: >-
old, early, mid, grass, leaf, girlish face, girly hair, sidelocks,
muscular body, mutation, extra limbs, extra legs, extra arms, extra hands,
missing limbs, missing legs, missing arms, missing hands, headwear, hat,
cap, beret, hairpin, hairband, hair ribbon, hair ornament, hairclip,
lowres, bad anatomy, bad hands, text, error, missing finger, extra digits,
fewer digits, cropped, worst quality, low quality, low score, bad score,
average score, signature, watermark, username, blurry
output:
url: images/ComfyUI_upscale_00021_.png
- text: >-
1girl, Aino Koito, sensitive, year 2025, indoors, teacher's room, solo,
tomboy, very short hair, blonde hair, very handsome face, aqua eyes, large
breasts, a adult very handsone tomboy girl with very boyish handsome
shortcut blonde hair with pixie cut and very handsome face and aqua eyes and
perfect female body and large breasts wearing teacher suit and tight
miniskirt and shirt and pantystocking and high heels and sitting on chair
with crossed legs and closed mouth, masterpiece, high score, great score,
absurdres
parameters:
negative_prompt: >-
old, early, mid, grass, leaf, girlish face, girly hair, sidelocks,
muscular body, mutation, extra limbs, extra legs, extra arms, extra hands,
missing limbs, missing legs, missing arms, missing hands, headwear, hat,
cap, beret, hairpin, hairband, hair ribbon, hair ornament, hairclip,
lowres, bad anatomy, bad hands, text, error, missing finger, extra digits,
fewer digits, cropped, worst quality, low quality, low score, bad score,
average score, signature, watermark, username, blurry
output:
url: images/ComfyUI_upscale_00027_.png
base_model: cagliostrolab/animagine-xl-4.0
instance_prompt: >-
Aino Koito, tomboy, very short hair, blonde hair, very handsome face, aqua
eyes, large breasts, a adult very handsone tomboy girl with very boyish
handsome shortcut blonde hair with pixie cut and very handsome face and aqua
eyes and perfect female body and large breasts
license: creativeml-openrail-m
---
# Original Character - 愛野小糸(Aino Koito) XL for Animagine XL 4.0 v1
<Gallery />
## Model description



Koito Aino, the tomboyish idol girl
for Animagine XL 4.0
**Trained Data**
Steps: 3030
Epoch: 10
Clip Skip: 2
Images: 402
Training Model: cagliostrolab/animagine-xl-4.0-zero
Learning rate: 1e-4
Unet Learning rate: 1e-4
TE Learning rate: 1e-5
LR Scheduler: Cosine with restarts
Network Dim: 32
Network Alpha: 16
Train Batch Size: 4
Mixed Precision: bf16
Optimizer Args: scale_parameter=False relative_step=False warmup_init=False
## Trigger words
You should use `Aino Koito` to trigger the image generation.
You should use `tomboy` to trigger the image generation.
You should use `very short hair` to trigger the image generation.
You should use `blonde hair` to trigger the image generation.
You should use `very handsome face` to trigger the image generation.
You should use `aqua eyes` to trigger the image generation.
You should use `large breasts` to trigger the image generation.
You should use `a adult very handsone tomboy girl with very boyish handsome shortcut blonde hair with pixie cut and very handsome face and aqua eyes and perfect female body and large breasts` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/bean980310/aino-koito-xl-animagine-xl-4-v1/tree/main) them in the Files & versions tab.
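A minimal loading sketch with diffusers, assuming the Animagine XL 4.0 base model (the generation settings below are illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-4.0", torch_dtype=torch.float16
).to("cuda")

# Load this LoRA on top of the base pipeline
pipe.load_lora_weights("bean980310/aino-koito-xl-animagine-xl-4-v1")

prompt = (
    "1girl, Aino Koito, tomboy, very short hair, blonde hair, "
    "very handsome face, aqua eyes, masterpiece, absurdres"
)
image = pipe(prompt, num_inference_steps=28, guidance_scale=6.0).images[0]
image.save("aino_koito.png")
```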
|
duandongsheng/ddpm-celebahq-finetuned-butterflies-2epochs | duandongsheng | 2025-05-03T13:51:37Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2025-05-03T13:38:16Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('duandongsheng/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
bean980310/makotono-aoi-xl-animagine-xl-4-v1 | bean980310 | 2025-05-03T13:51:17Z | 3 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:cagliostrolab/animagine-xl-4.0",
"base_model:adapter:cagliostrolab/animagine-xl-4.0",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-04-22T12:44:17Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
1girl, Makotono Aoi, sensitive, year 2025, outdoors, stage, stage lights,
solo, tomboy, very short hair, blue hair, parted bangs, very handsome face,
yellow eyes, large breasts, a adult tomboy girl with very boyish handsome
shortcut blue hair with (parted bangs:1.2) and pixie cut and very handsome
face and yellow eyes and perfect female body and large breasts wearing blue
idol uniform and idol clothes and pleated miniskirt and gloves and
thighhighs and high heels and standing and singing with grab microphone,
masterpiece, high score, great score, absurdres
parameters:
negative_prompt: >-
old, early, mid, grass, leaf, girly hair, muscular body, mutation, extra
limbs, extra legs, extra arms, extra hands, missing limbs, missing legs,
missing arms, missing hands, headwear, hat, cap, beret, hairpin, hairband,
hair ribbon, hair ornament, hairclip, lowres, bad anatomy, bad hands,
text, error, missing finger, extra digits, fewer digits, cropped, worst
quality, low quality, low score, bad score, average score, signature,
watermark, username, blurry
output:
url: images/ComfyUI_upscale_00023_.png
- text: >-
1girl, Makotono Aoi, sensitive, year 2025, outdoors, solo, town, tomboy,
very short hair, blue hair, parted bangs, very handsome face, yellow eyes,
large breasts, a adult tomboy girl with very boyish handsome shortcut blue
hair with (parted bangs:1.2) and pixie cut and very handsome face and yellow
eyes and perfect female body and large breasts wearing white military
uniform with epaulettes and aiguillette and military blue skirt and military
white legwear and military white footwear and military white jacket with
white long sleeves and white buttoned shirt and white lace trim bridal
gloves and white cape with shoulder cape and brown belt with gold belt
buckle and pleated blue miniskirt with white stripes and white thighhighs
with floral lace trim and white high heels and standing with legs together
and hand on hip and serious face, masterpiece, high score, great score,
absurdres
parameters:
negative_prompt: >-
old, early, mid, grass, leaf, girly hair, muscular body, mutation, extra
limbs, extra legs, extra arms, extra hands, missing limbs, missing legs,
missing arms, missing hands, headwear, hat, cap, beret, hairpin, hairband,
hair ribbon, hair ornament, hairclip, lowres, bad anatomy, bad hands,
text, error, missing finger, extra digits, fewer digits, cropped, worst
quality, low quality, low score, bad score, average score, signature,
watermark, username, blurry
output:
url: images/ComfyUI_upscale_00020_.png
- text: >-
1girl, Makotono Aoi, sensitive, year 2025, indoors, teacher's room, solo,
tomboy, very short hair, blue hair, parted bangs, very handsome face, yellow
eyes, large breasts, a adult tomboy girl with very boyish handsome shortcut
blue hair with (parted bangs:1.2) and pixie cut and very handsome face and
yellow eyes and perfect female body and large breasts wearing teacher suit
and tight miniskirt and shirt and pantystocking and high heels and sitting
on chair with crossed legs and closed mouth, masterpiece, high score, great
score, absurdres
parameters:
negative_prompt: >-
old, early, mid, grass, leaf, girly hair, muscular body, mutation, extra
limbs, extra legs, extra arms, extra hands, missing limbs, missing legs,
missing arms, missing hands, headwear, hat, cap, beret, hairpin, hairband,
hair ribbon, hair ornament, hairclip, lowres, bad anatomy, bad hands,
text, error, missing finger, extra digits, fewer digits, cropped, worst
quality, low quality, low score, bad score, average score, signature,
watermark, username, blurry
output:
url: images/ComfyUI_upscale_00026_.png
base_model: cagliostrolab/animagine-xl-4.0
instance_prompt: >-
Makotono Aoi, tomboy, very short hair, blue hair, parted bangs, very handsome
face, yellow eyes, large breasts, a adult tomboy girl with very boyish
handsome shortcut blue hair with (parted bangs:1.2) and pixie cut and very
handsome face and yellow eyes and perfect female body and large breasts
license: creativeml-openrail-m
---
# Original Character - 真琴乃葵(Makotono Aoi) XL for Animagine XL 4.0 v1
<Gallery />
## Model description



Aoi Makotono, the tomboyish idol girl
for Animagine XL 4.0
**Trained Data**
Steps: 3000
Epoch: 10
Clip Skip: 2
Images: 400
Training Model: cagliostrolab/animagine-xl-4.0-zero
Learning rate: 1e-4
Unet Learning rate: 1e-4
TE Learning rate: 1e-5
LR Scheduler: Cosine with restarts
Network Dim: 32
Network Alpha: 16
Train Batch Size: 4
Mixed Precision: bf16
Optimizer Args: scale_parameter=False relative_step=False warmup_init=False
## Trigger words
You should use `Makotono Aoi` to trigger the image generation.
You should use `tomboy` to trigger the image generation.
You should use `very short hair` to trigger the image generation.
You should use `blue hair` to trigger the image generation.
You should use `parted bangs` to trigger the image generation.
You should use `very handsome face` to trigger the image generation.
You should use `yellow eyes` to trigger the image generation.
You should use `large breasts` to trigger the image generation.
You should use `a adult tomboy girl with very boyish handsome shortcut blue hair with (parted bangs:1.2) and pixie cut and very handsome face and yellow eyes and perfect female body and large breasts` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/bean980310/makotono-aoi-xl-animagine-xl-4-v1/tree/main) them in the Files & versions tab.
|
PandaLikesPotato/georgian_letters | PandaLikesPotato | 2025-05-03T13:49:31Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T13:49:31Z | ---
license: apache-2.0
---
|
arkitex/wav2vec2-finetune-authentic-only | arkitex | 2025-05-03T13:48:25Z | 56 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-17T18:51:35Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: wav2vec2-finetune-authentic-only
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: common_voice_17_0
type: common_voice_17_0
config: en
split: None
args: en
metrics:
- type: wer
value: 0.31685876147685094
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-finetune-authentic-only
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4832
- Wer: 0.3169
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 4.2515 | 0.1337 | 500 | 2.9469 | 1.0 |
| 1.4619 | 0.2674 | 1000 | 0.9826 | 0.5937 |
| 0.7206 | 0.4011 | 1500 | 0.8086 | 0.4917 |
| 0.6012 | 0.5348 | 2000 | 0.7485 | 0.4530 |
| 0.5422 | 0.6684 | 2500 | 0.7128 | 0.4329 |
| 0.5063 | 0.8021 | 3000 | 0.6346 | 0.4055 |
| 0.479 | 0.9358 | 3500 | 0.6450 | 0.4001 |
| 0.44 | 1.0695 | 4000 | 0.6126 | 0.3856 |
| 0.4103 | 1.2032 | 4500 | 0.5970 | 0.3747 |
| 0.396 | 1.3369 | 5000 | 0.5792 | 0.3780 |
| 0.3822 | 1.4706 | 5500 | 0.5786 | 0.3643 |
| 0.3706 | 1.6043 | 6000 | 0.5387 | 0.3507 |
| 0.3669 | 1.7380 | 6500 | 0.5292 | 0.3546 |
| 0.3544 | 1.8717 | 7000 | 0.5145 | 0.3436 |
| 0.3492 | 2.0053 | 7500 | 0.5322 | 0.3342 |
| 0.3066 | 2.1390 | 8000 | 0.5284 | 0.3323 |
| 0.3006 | 2.2727 | 8500 | 0.5248 | 0.3333 |
| 0.2954 | 2.4064 | 9000 | 0.4983 | 0.3221 |
| 0.2914 | 2.5401 | 9500 | 0.4844 | 0.3202 |
| 0.2841 | 2.6738 | 10000 | 0.4881 | 0.3167 |
| 0.2816 | 2.8075 | 10500 | 0.4815 | 0.3167 |
| 0.2777 | 2.9412 | 11000 | 0.4832 | 0.3169 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0
- Datasets 3.5.0
- Tokenizers 0.21.1
|
hangcai/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deft_small_cod | hangcai | 2025-05-03T13:48:12Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am deft small cod",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T20:21:42Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deft_small_cod
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am deft small cod
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deft_small_cod
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hangcai/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deft_small_cod", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
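For reference, a minimal GRPO training sketch with TRL (the reward function and dataset here are illustrative placeholders, not the RL-swarm setup used for this checkpoint):
```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# toy reward: prefer shorter completions (illustrative only)
def reward_len(completions, **kwargs):
    return [-float(len(c)) for c in completions]

train_dataset = Dataset.from_dict({"prompt": ["Write a haiku about the sea."] * 64})

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-output"),
    train_dataset=train_dataset,
)
trainer.train()
```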
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
thembocuroj/dfgbfgbfgb | thembocuroj | 2025-05-03T13:47:53Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-03T13:47:53Z | ---
license: bigscience-openrail-m
---
|
yolo765/sdxl-naruto-model | yolo765 | 2025-05-03T13:37:29Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-05-03T00:14:11Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers-training
- diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text-to-image finetuning - yolo765/sdxl-naruto-model
This pipeline was finetuned from **stabilityai/stable-diffusion-xl-base-1.0** on the **lambdalabs/naruto-blip-captions** dataset. Below are some example images generated with the finetuned pipeline using the prompt "a cute Sundar Pichai creature":




Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
# minimal sketch (not from the training script): load this fine-tuned SDXL pipeline and sample an image
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("yolo765/sdxl-naruto-model", torch_dtype=torch.float16).to("cuda")
image = pipe("a cute Sundar Pichai creature").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
memeviss/zombieIX_9 | memeviss | 2025-05-03T13:35:54Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-05-03T11:30:57Z | # Optimized TTS Model
This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques.
## Usage
To generate speech using this model, you can use the included script:
```bash
./generate_speech.py --text "Your text here" --output_path output.wav
```
For more details, see the optimization report in this directory.
|
Triangle104/Gemma-3-Starshine-12B-Q8_0-GGUF | Triangle104 | 2025-05-03T13:35:46Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:ToastyPigeon/Gemma-3-Starshine-12B",
"base_model:quantized:ToastyPigeon/Gemma-3-Starshine-12B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T13:33:23Z | ---
base_model: ToastyPigeon/Gemma-3-Starshine-12B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/Gemma-3-Starshine-12B-Q8_0-GGUF
This model was converted to GGUF format from [`ToastyPigeon/Gemma-3-Starshine-12B`](https://huggingface.co/ToastyPigeon/Gemma-3-Starshine-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ToastyPigeon/Gemma-3-Starshine-12B) for more details on the model.
---
A creative writing model based on a merge of fine-tunes on Gemma 3 12B IT and Gemma 3 12B PT.
This is the Story Focused merge. This version works better for storytelling and scenarios, as the prose is more novel-like and it has a tendency to impersonate the user character. See the Alternate RP Focused version as well.
This is a merge of two G3 models, one trained on instruct and one trained on base:
- allura-org/Gemma-3-Glitter-12B - Itself a merge of a storywriting and RP train (both also by ToastyPigeon), on instruct
- ToastyPigeon/Gemma-3-Confetti-12B - Experimental application of the Glitter data using base instead of instruct; additionally includes some adventure data in the form of SpringDragon.
The result is a lovely blend of Glitter's ability to follow instructions and Confetti's free-spirit prose, effectively 'loosening up' much of the hesitancy that was left in Glitter.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Gemma-3-Starshine-12B-Q8_0-GGUF --hf-file gemma-3-starshine-12b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Gemma-3-Starshine-12B-Q8_0-GGUF --hf-file gemma-3-starshine-12b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Gemma-3-Starshine-12B-Q8_0-GGUF --hf-file gemma-3-starshine-12b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Gemma-3-Starshine-12B-Q8_0-GGUF --hf-file gemma-3-starshine-12b-q8_0.gguf -c 2048
```
|
Triangle104/Gemma-3-Starshine-12B-Q6_K-GGUF | Triangle104 | 2025-05-03T13:32:13Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:ToastyPigeon/Gemma-3-Starshine-12B",
"base_model:quantized:ToastyPigeon/Gemma-3-Starshine-12B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T13:30:12Z | ---
base_model: ToastyPigeon/Gemma-3-Starshine-12B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/Gemma-3-Starshine-12B-Q6_K-GGUF
This model was converted to GGUF format from [`ToastyPigeon/Gemma-3-Starshine-12B`](https://huggingface.co/ToastyPigeon/Gemma-3-Starshine-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ToastyPigeon/Gemma-3-Starshine-12B) for more details on the model.
---
A creative writing model based on a merge of fine-tunes on Gemma 3 12B IT and Gemma 3 12B PT.
This is the Story Focused merge. This version works better for storytelling and scenarios, as the prose is more novel-like and it has a tendency to impersonate the user character. See the Alternate RP Focused version as well.
This is a merge of two G3 models, one trained on instruct and one trained on base:
- allura-org/Gemma-3-Glitter-12B - Itself a merge of a storywriting and RP train (both also by ToastyPigeon), on instruct
- ToastyPigeon/Gemma-3-Confetti-12B - Experimental application of the Glitter data using base instead of instruct; additionally includes some adventure data in the form of SpringDragon.
The result is a lovely blend of Glitter's ability to follow instructions and Confetti's free-spirit prose, effectively 'loosening up' much of the hesitancy that was left in Glitter.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Gemma-3-Starshine-12B-Q6_K-GGUF --hf-file gemma-3-starshine-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Gemma-3-Starshine-12B-Q6_K-GGUF --hf-file gemma-3-starshine-12b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Gemma-3-Starshine-12B-Q6_K-GGUF --hf-file gemma-3-starshine-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Gemma-3-Starshine-12B-Q6_K-GGUF --hf-file gemma-3-starshine-12b-q6_k.gguf -c 2048
```
|
19uez/GRPO_llama3_2_3B_16_005_1k_part1 | 19uez | 2025-05-03T13:25:52Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"trl",
"grpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T13:24:54Z | ---
library_name: transformers
tags:
- unsloth
- trl
- grpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HarethahMo/qwen2.5-3b-base-abliterated | HarethahMo | 2025-05-03T13:25:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T13:06:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
openfree/sergey-lazarev | openfree | 2025-05-03T13:23:17Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-03T13:23:12Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: 'A Sergey Lazarev as USA president, rainbow hair color, pink suit, 26K
'
output:
url: samples/1746278529264__000001111_0.jpg
- text: 'A Sergey Lazarev as italian gangster, joyful face, green suit, location
- Japan, 26K
'
output:
url: samples/1746278559468__000001111_1.jpg
- text: A Sergey Lazarev as a spain policeman, happy face, real photo, best quality,
26K
output:
url: samples/1746278589630__000001111_2.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Sergey Lazarev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# sergey-lazarev
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `Sergey Lazarev` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/openfree/sergey-lazarev/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('openfree/sergey-lazarev', weight_name='sergey-lazarev.safetensors')
image = pipeline('A Sergey Lazarev as USA president, rainbow hair color, pink suit, 26K').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
cnfusion/UIGEN-T2-7B-mlx-fp16 | cnfusion | 2025-05-03T13:22:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"ui-generation",
"peft",
"lora",
"tailwind-css",
"html",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"base_model:Tesslate/UIGEN-T2-7B",
"base_model:adapter:Tesslate/UIGEN-T2-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T13:21:49Z | ---
base_model: Tesslate/UIGEN-T2-7B
tags:
- text-generation-inference
- transformers
- qwen2
- ui-generation
- peft
- lora
- tailwind-css
- html
- mlx
- mlx-my-repo
license: apache-2.0
language:
- en
---
# cnfusion/UIGEN-T2-7B-mlx-fp16
The Model [cnfusion/UIGEN-T2-7B-mlx-fp16](https://huggingface.co/cnfusion/UIGEN-T2-7B-mlx-fp16) was converted to MLX format from [Tesslate/UIGEN-T2-7B](https://huggingface.co/Tesslate/UIGEN-T2-7B) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("cnfusion/UIGEN-T2-7B-mlx-fp16")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
cnfusion/UIGEN-T2-7B-mlx-8Bit | cnfusion | 2025-05-03T13:20:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"ui-generation",
"peft",
"lora",
"tailwind-css",
"html",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"base_model:Tesslate/UIGEN-T2-7B",
"base_model:adapter:Tesslate/UIGEN-T2-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2025-05-03T13:19:57Z | ---
base_model: Tesslate/UIGEN-T2-7B
tags:
- text-generation-inference
- transformers
- qwen2
- ui-generation
- peft
- lora
- tailwind-css
- html
- mlx
- mlx-my-repo
license: apache-2.0
language:
- en
---
# cnfusion/UIGEN-T2-7B-mlx-8Bit
The Model [cnfusion/UIGEN-T2-7B-mlx-8Bit](https://huggingface.co/cnfusion/UIGEN-T2-7B-mlx-8Bit) was converted to MLX format from [Tesslate/UIGEN-T2-7B](https://huggingface.co/Tesslate/UIGEN-T2-7B) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("cnfusion/UIGEN-T2-7B-mlx-8Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
vangard703/single_image_11_tasks_single_tasks_codebase_L1_scale_5_tasks | vangard703 | 2025-05-03T13:19:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-03T13:16:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ASethi04/meta-llama-Llama-3.1-8B-legalbench-first-lora-4-0.0001 | ASethi04 | 2025-05-03T13:17:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T12:32:31Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-legalbench-first-lora-4-0.0001
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.1-8B-legalbench-first-lora-4-0.0001
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-legalbench-first-lora-4-0.0001", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/z3zgdd0s)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bbehrang/vaddosgus | bbehrang | 2025-05-03T13:13:56Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-03T12:42:10Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: VADDOSGUS
---
# Vaddosgus
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `VADDOSGUS` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "VADDOSGUS",
"lora_weights": "https://huggingface.co/bbehrang/vaddosgus/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('bbehrang/vaddosgus', weight_name='lora.safetensors')
image = pipeline('VADDOSGUS').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/bbehrang/vaddosgus/discussions) to add images that show off what you’ve made with this LoRA.
|
vangard703/single_image_11_tasks_three_tasks_qa_5_tasks | vangard703 | 2025-05-03T13:10:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-03T13:07:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
EXt1/KMUTT-CPE35-thai-mt5base-summarizer | EXt1 | 2025-05-03T13:07:15Z | 344 | 1 | peft | [
"peft",
"safetensors",
"mt5",
"summarization",
"base_model:google/mt5-base",
"base_model:adapter:google/mt5-base",
"8-bit",
"bitsandbytes",
"region:us"
] | summarization | 2025-04-20T08:39:24Z | ---
base_model: google/mt5-base
library_name: peft
pipeline_tag: summarization
---
# KMUTT-CPE35-thai-mt5base-summarizer
This repository contains a fine-tuned version of google/mt5-base for the task of Thai text summarization. The model was trained on 20,000 samples from the ThaiSum dataset and is part of a senior project in the Computer Engineering Department at King Mongkut’s University of Technology Thonburi (KMUTT).
## Model Description
- **Base model:** google/mt5-base
- **Task:** Text Summarization (Thai)
- **Fine-tuning dataset:** ThaiSum (20k samples)
- **Quantization:** 8-bit
- **Max sequence length:** 512 tokens
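## Usage
A minimal usage sketch (assuming the adapter loads on top of google/mt5-base via PEFT; the generation settings below are illustrative):
```python
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
base_model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-base")
model = PeftModel.from_pretrained(base_model, "EXt1/KMUTT-CPE35-thai-mt5base-summarizer")

text = "..."  # Thai article to summarize
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```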
## Evaluation
The performance of the model was evaluated using the ROUGE metric, which is commonly used for assessing the quality of summarization tasks. The evaluation results on the test set are as follows:
* ROUGE-1: 0.4498
* ROUGE-2: 0.2551
* ROUGE-L: 0.4481
* ROUGE-Lsum: 0.4501
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.2 |
loc1105/qwen2-capydata-finetuned | loc1105 | 2025-05-03T13:04:21Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-05-03T13:04:14Z | ---
base_model: unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |