modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-28 06:27:35) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 500 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-28 06:24:42) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
relaxml/Llama-3.1-8b-Instruct-QTIP-3Bit | relaxml | 2024-10-28T02:40:10Z | 30 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2024-10-05T17:45:52Z | 
|
ganga4364/mms_300_v4.96000 | ganga4364 | 2024-10-28T02:37:42Z | 189 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-28T02:37:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
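In its absence, here is a minimal, untested sketch using the 🤗 Transformers automatic-speech-recognition pipeline; the audio file path is a placeholder, and 16 kHz mono input is assumed because that is what wav2vec2 checkpoints normally expect:
```python
# Minimal sketch (assumes the checkpoint works with the standard ASR pipeline).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ganga4364/mms_300_v4.96000",
)

# "audio.wav" is a placeholder; pass any 16 kHz mono recording.
result = asr("audio.wav")
print(result["text"])
```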
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Primeness/DeezNutz6 | Primeness | 2024-10-28T02:31:49Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T01:27:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
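As a stopgap while the card is empty, the snippet below is a minimal sketch using the standard text-generation pipeline; the dtype and device settings are assumptions, not the author's recommendation:
```python
# Minimal sketch (assumes the checkpoint loads with AutoModelForCausalLM defaults).
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Primeness/DeezNutz6",
    torch_dtype=torch.bfloat16,  # assumption; fall back to float16/float32 if unsupported
    device_map="auto",
)

print(generator("Once upon a time", max_new_tokens=64)[0]["generated_text"])
```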
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
playboy40k/flux-EmmaStoneLora | playboy40k | 2024-10-28T02:25:48Z | 90 | 3 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2024-10-28T02:23:41Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/ComfyUI_Flux_Finetune_00094_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# Emma Stone Flux
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/playboy40k/flux-EmmaStoneLora/tree/main) them in the Files & versions tab.
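The card only points at the Files & versions tab; a minimal sketch of applying the LoRA on top of the listed base model with 🤗 Diffusers might look like the following. The prompt is illustrative (the card sets no instance prompt), and loading by repo id assumes Diffusers picks up the Safetensors LoRA file automatically:
```python
# Minimal sketch (assumes the LoRA loads onto FLUX.1-dev via Diffusers' LoRA API).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("playboy40k/flux-EmmaStoneLora")  # resolves the .safetensors in the repo
pipe.to("cuda")

image = pipe("portrait photo of a woman, natural light", num_inference_steps=28).images[0]
image.save("out.png")
```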
|
ndhananj/ndhananj-llama-3.2.Instruct | ndhananj | 2024-10-28T02:25:38Z | 174 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T02:15:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model uses Llama-3.2-1B-Instruct as a base. It performs **50%** better than the same fine-tuning applied to EleutherAI/gpt-neo-1.3B on the HellaSwag benchmark for instruction following.
## Model Details
# Model Card
## Model Description
This is an ORPO fine-tune of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k) dataset.
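The card does not include the training script; the block below is a rough, hypothetical sketch of how such an ORPO fine-tune could be set up with TRL's `ORPOTrainer`. Every hyperparameter shown is an illustrative assumption, not the configuration actually used:
```python
# Hypothetical ORPO fine-tuning sketch with TRL; not the author's actual script.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "meta-llama/Llama-3.2-1B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

args = ORPOConfig(
    output_dir="orpo-llama-3.2-1b",   # placeholder
    beta=0.1,                          # assumed ORPO lambda
    per_device_train_batch_size=2,
    num_train_epochs=1,
    learning_rate=5e-6,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL versions take `tokenizer=` instead
)
trainer.train()
```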
## Evaluation Results
### Hellaswag for this model
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|---------|------:|------|-----:|--------|---|-----:|---|-----:|
|hellaswag| 1|none | 0|acc |↑ |0.4501|± |0.0050|
| | |none | 0|acc_norm|↑ |0.6072|± |0.0049|
### Hellaswag for the same fine-tuning of EleutherAI/gpt-neo-1.3B
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|---------|------:|------|-----:|--------|---|-----:|---|-----:|
|hellaswag| 1|none | 0|acc |↑ |0.3853|± |0.0049|
| | |none | 0|acc_norm|↑ |0.4891|± |0.0050|
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
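Until the author fills this in, the following is a minimal chat-style sketch that relies on the tokenizer's chat template; the generation settings are assumptions:
```python
# Minimal chat-style sketch; generation settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ndhananj/ndhananj-llama-3.2.Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain in one sentence what ORPO fine-tuning does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```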
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
teoteo1993/lovepet_model | teoteo1993 | 2024-10-28T02:23:52Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-28T02:21:31Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** teoteo1993
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
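Since the repo is tagged `gguf`, a hedged sketch of loading the exported weights with llama-cpp-python is given below; the filename glob is an assumption because the card does not list the quantization files:
```python
# Hypothetical sketch: loading the GGUF export with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="teoteo1993/lovepet_model",
    filename="*.gguf",  # assumption: match whichever quantization file is present
    n_ctx=2048,
)
print(llm("Q: What do cats need to stay healthy?\nA:", max_tokens=64)["choices"][0]["text"])
```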
|
Lucia-no/sn29_C00_O27_0 | Lucia-no | 2024-10-28T02:18:15Z | 58 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T02:14:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jtupayac/gemma-2-9b-it-crag_new | jtupayac | 2024-10-28T02:13:25Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T02:09:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
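In the absence of author-provided code, here is a minimal sketch using the chat-aware text-generation pipeline (requires a recent Transformers release; dtype and device settings are assumptions):
```python
# Minimal sketch; assumes the checkpoint behaves like a standard Gemma 2 chat model.
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="jtupayac/gemma-2-9b-it-crag_new",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Briefly introduce yourself."}]
# With message input, generated_text is the full chat; the last entry is the reply.
print(chat(messages, max_new_tokens=128)[0]["generated_text"][-1]["content"])
```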
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
theprint/WorldBuilder-7B | theprint | 2024-10-28T02:10:14Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T02:05:30Z | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
regunathanr/gemma-math-finetune-regu | regunathanr | 2024-10-28T02:02:21Z | 131 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T01:55:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
godus81834/krx-meta-llama-3.1-8b-instruct | godus81834 | 2024-10-28T02:00:21Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"krx",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-20T08:52:37Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- krx
---
# Uploaded model
- **Developed by:** godus1201
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
crestf411/MS-sunfall-v0.7.0-gguf | crestf411 | 2024-10-28T01:57:04Z | 34 | 6 | null | [
"gguf",
"base_model:crestf411/MS-sunfall-v0.7.0",
"base_model:quantized:crestf411/MS-sunfall-v0.7.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-28T01:33:42Z | ---
base_model:
- crestf411/MS-sunfall-v0.7.0
--- |
ndhananj/ndhananj-gpt-neo-1.3B | ndhananj | 2024-10-28T01:53:05Z | 150 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T17:34:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model was a first-pass test to verify that a workflow could produce a model. Do not use it for real purposes.
## Model Details
# Model Card
## Model Description
This is an ORPO fine-tune of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k) dataset.
## Evaluation Results
### Hellaswag
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|---------|------:|------|-----:|--------|---|-----:|---|-----:|
|hellaswag| 1|none | 0|acc |↑ |0.3853|± |0.0049|
| | |none | 0|acc_norm|↑ |0.4891|± |0.0050|
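These numbers have the shape of lm-evaluation-harness output; a hedged sketch of reproducing them with the harness's Python API follows (the exact harness version and arguments the author used are unknown):
```python
# Hypothetical reproduction sketch with lm-evaluation-harness (v0.4+ Python API).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ndhananj/ndhananj-gpt-neo-1.3B",
    tasks=["hellaswag"],
    num_fewshot=0,
)
print(results["results"]["hellaswag"])
```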
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf | RichardErkhov | 2024-10-28T01:45:08Z | 8 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T15:18:08Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mistral-Nemo-Instruct-2407-20b - GGUF
- Model creator: https://huggingface.co/win10/
- Original model: https://huggingface.co/win10/Mistral-Nemo-Instruct-2407-20b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mistral-Nemo-Instruct-2407-20b.Q2_K.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q2_K.gguf) | Q2_K | 8.01GB |
| [Mistral-Nemo-Instruct-2407-20b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q3_K_S.gguf) | Q3_K_S | 9.3GB |
| [Mistral-Nemo-Instruct-2407-20b.Q3_K.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q3_K.gguf) | Q3_K | 10.3GB |
| [Mistral-Nemo-Instruct-2407-20b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q3_K_M.gguf) | Q3_K_M | 10.3GB |
| [Mistral-Nemo-Instruct-2407-20b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q3_K_L.gguf) | Q3_K_L | 11.17GB |
| [Mistral-Nemo-Instruct-2407-20b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.IQ4_XS.gguf) | IQ4_XS | 11.53GB |
| [Mistral-Nemo-Instruct-2407-20b.Q4_0.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q4_0.gguf) | Q4_0 | 12.01GB |
| [Mistral-Nemo-Instruct-2407-20b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.IQ4_NL.gguf) | IQ4_NL | 12.14GB |
| [Mistral-Nemo-Instruct-2407-20b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q4_K_S.gguf) | Q4_K_S | 12.09GB |
| [Mistral-Nemo-Instruct-2407-20b.Q4_K.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q4_K.gguf) | Q4_K | 12.73GB |
| [Mistral-Nemo-Instruct-2407-20b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q4_K_M.gguf) | Q4_K_M | 12.73GB |
| [Mistral-Nemo-Instruct-2407-20b.Q4_1.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q4_1.gguf) | Q4_1 | 13.29GB |
| [Mistral-Nemo-Instruct-2407-20b.Q5_0.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q5_0.gguf) | Q5_0 | 14.57GB |
| [Mistral-Nemo-Instruct-2407-20b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q5_K_S.gguf) | Q5_K_S | 14.57GB |
| [Mistral-Nemo-Instruct-2407-20b.Q5_K.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q5_K.gguf) | Q5_K | 14.94GB |
| [Mistral-Nemo-Instruct-2407-20b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q5_K_M.gguf) | Q5_K_M | 14.94GB |
| [Mistral-Nemo-Instruct-2407-20b.Q5_1.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q5_1.gguf) | Q5_1 | 15.85GB |
| [Mistral-Nemo-Instruct-2407-20b.Q6_K.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q6_K.gguf) | Q6_K | 17.28GB |
| [Mistral-Nemo-Instruct-2407-20b.Q8_0.gguf](https://huggingface.co/RichardErkhov/win10_-_Mistral-Nemo-Instruct-2407-20b-gguf/blob/main/Mistral-Nemo-Instruct-2407-20b.Q8_0.gguf) | Q8_0 | 22.38GB |
Original model description:
---
base_model:
- unsloth/Mistral-Nemo-Instruct-2407
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 2]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [1, 3]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [2, 4]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [3, 5]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
# The layers below are the newly added ones
- sources:
- layer_range: [4, 6]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [5, 7]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [6, 8]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [7, 9]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [8, 10]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [9, 11]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [10, 12]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [11, 13]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [12, 14]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [13, 15]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [14, 16]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [15, 17]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [16, 18]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [17, 19]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [18, 20]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [19, 21]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [20, 22]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [21, 23]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [22, 24]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [23, 25]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [24, 26]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [25, 27]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [26, 28]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [27, 29]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [28, 30]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [29, 31]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [30, 32]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [31, 33]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [32, 34]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [33, 35]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [34, 36]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [35, 37]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [36, 38]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [37, 39]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [38, 40]
model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
```
|
2point5p/krx-qwen2.5-7b-it-s-too-bad | 2point5p | 2024-10-28T01:40:54Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"krx",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T00:35:59Z | ---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- krx
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** 2point5p
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
muhtasham/tajik-llama3-3b-merged-16bit | muhtasham | 2024-10-28T01:07:14Z | 82 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T01:05:29Z | ---
base_model: unsloth/llama-3.2-3b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** muhtasham
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tlsdm65376/krxlaw_Meta-Llama-3.1-8B | tlsdm65376 | 2024-10-28T01:03:37Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"krx",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T06:14:39Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- krx
---
# Uploaded model
- **Developed by:** tlsdm65376
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
Primeness/DeezNutz5 | Primeness | 2024-10-28T01:02:05Z | 37 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T23:57:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vijay-ravichander/LMSYS-Gemma-9B-4bit | vijay-ravichander | 2024-10-28T01:00:26Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-classification | 2024-10-28T00:54:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
auskola/sentimientos | auskola | 2024-10-28T00:58:48Z | 12 | 0 | null | [
"safetensors",
"electra",
"text-classification",
"region:us"
] | text-classification | 2024-10-25T00:55:06Z | ---
pipeline_tag: text-classification
widget:
- text: "This movie was amazing! I loved it."
example_title: "Positive example"
- text: "This was a terrible waste of time."
example_title: "Negative example"
---
# Sentiment Analysis Model
## Model Details
- **Base Model**: google/electra-base-discriminator
- **Task**: Binary Sentiment Analysis (Positive/Negative)
- **Datasets**: IMDB and Amazon Reviews
- **Language**: English
## Training Hyperparameters
- **Batch Size**: 8
- **Learning Rate**: 2e-5
- **Number of Epochs**: 2
- **Max Sequence Length**: 128 tokens
- **Model Architecture**: ELECTRA (Discriminator)
## Training
The model was trained using a combination of IMDB and Amazon reviews datasets, using ELECTRA's discriminator architecture which is particularly efficient with limited data. The hyperparameters were optimized for performance on consumer-grade hardware.
## Usage
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
# Load model and tokenizer
model_name = "auskola/sentimientos"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
def analyze_sentiment(text):
# Tokenize and predict
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
outputs = model(**inputs)
probabilities = torch.nn.functional.softmax(outputs.logits, dim=1)
# Get prediction and confidence
prediction = torch.argmax(probabilities, dim=1)
confidence = torch.max(probabilities).item()
return {
"sentiment": "Positive" if prediction.item() == 1 else "Negative",
"confidence": confidence
}
# Example usage
texts = [
"This product exceeded my expectations!",
"Terrible service, would not recommend",
"The movie was pretty good"
]
for text in texts:
result = analyze_sentiment(text)
print(f"\nText: {text}")
print(f"Sentiment: {result['sentiment']}")
    print(f"Confidence: {result['confidence']:.2f}")
```
|
betteib/tn_updated_v9 | betteib | 2024-10-28T00:50:25Z | 135 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T23:55:22Z | ---
base_model: gpt2
library_name: transformers
license: mit
tags:
- generated_from_trainer
model-index:
- name: tn_updated_v9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tn_updated_v9
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.1424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0009
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
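For readers who want to reproduce a similar run, the hyperparameters above map roughly onto `transformers.TrainingArguments` as in this sketch (an illustration, not the author's training script; model, tokenizer, and dataset setup are omitted, and the output directory is a placeholder):
```python
# Sketch: the hyperparameters listed above expressed as TrainingArguments.
# Optimizer and other unspecified settings are left at the Trainer defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="tn_updated_v9",      # placeholder
    learning_rate=9e-4,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    gradient_accumulation_steps=4,   # effective train batch size of 24
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=500,
    num_train_epochs=8,
)
```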
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.97 | 2.5284 | 500 | 6.3858 |
| 4.6079 | 5.0569 | 1000 | 7.1778 |
| 3.4102 | 7.5853 | 1500 | 8.1424 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
MikeRoz/TheDrummer_Behemoth-123B-v1.1-2.5bpw-h6-exl2 | MikeRoz | 2024-10-28T00:41:01Z | 6 | 1 | null | [
"safetensors",
"mistral",
"license:other",
"exl2",
"region:us"
] | null | 2024-10-27T22:21:03Z | ---
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2000 members strong 💪
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Behemoth 123B v1.1 🦣 - Creative Edition
*When you spend your whole life living under a dome, even the idea of an ocean seems impossible to imagine.*

## Description
> One of the few other models that's done this for me is the OG Command R 35B. So seeing Behemoth v1.1 have a similar feel to that but with much higher general intelligence really makes it a favourite of mine
> I was real happy with v1.1 the other day. I've done some tests on v1 and it's a lot better.
> v1 had those glimpses of creativity, but now it's more consistent (with v1.1). It feels like a new model in comparison.
> v1 had slop bro. v1.1 makes it irrelevant. The jump is like 720p to 4k. Seriously.
> The creativity for v1.1 is off the charts compared to v1, like it's juiced. v1 had these moments where I would say... 'Shit, I've never seen a model respond with prose like this, let me regenerate to see what else I get.' Now, even though every regeneration had a flow of possibilities, sometimes those possibilities never came. v1.1 is comparable to xxx for the first time, every generation. It directs and guides the scene, scenario and characters unlike anything else
> It's about the f***ing prose man. The atmosphere that revolves around the characters. Not just the damn dialogue or introspection. v1.1 will pull from a message 7 generations ago. That window I opened will appear in a future response with the noise from the courtyard filtering through it. The experience of not knowing what this model will produce because it's different than anything else is what keeps it engaging.
## Links
- Original: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1
- GGUF: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1-GGUF
- iMatrix: WIP
## Arsenal (Supported Chat Templates)
- Mistral
- Smart, adaptable, familiar
- Metharme (Pygmalion in ST)
- Creative, unhinged, unique
- Alpaca
- Creative, unique, unhinged
- Text Completion
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV
## What's Next?
- Already have plans for a v2!
## Special Thanks
- Thank you to each and everyone who donated in [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier.
- KinjiHakari777, Dr. Fjut, Kistara, Pseudo, AlexTheVP, Dakkidaze, EvarinSharath'fe, ONTHEREDTEAM, F, Mariana, Garg, Silva, Grozi, & **Phaelon**

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
|
ashercn97/deberta_v3_finetuned | ashercn97 | 2024-10-28T00:40:18Z | 117 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"fill-mask",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-10-28T00:39:50Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta_v3_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta_v3_finetuned
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.1343 | 1.0 | 1406 | 4.5203 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
ramonactruta/ramonactruta-llama-3.2.Instruct-chat | ramonactruta | 2024-10-28T00:38:16Z | 98 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"orpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-28T00:08:27Z | ---
library_name: transformers
tags:
- trl
- orpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MatthewFrank/roberta-large_pytorch_AllData_V01 | MatthewFrank | 2024-10-28T00:18:50Z | 122 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T02:41:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Viscoke/call1 | Viscoke | 2024-10-27T23:54:53Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T23:51:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
erichennings/EH-sentiment-finetuned-Llama-3.2-1B-Instruct | erichennings | 2024-10-27T23:41:04Z | 175 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:mteb/amazon_polarity",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T00:38:22Z | ---
library_name: transformers
license: llama3.2
datasets:
- mteb/amazon_polarity
base_model:
- meta-llama/Llama-3.2-1B-Instruct
---
# Model Card for EH-sentiment-finetuned-Llama-3.2-1B-Instruct
This is a test project: fine-tuning Llama-3.2-1B-Instruct for sentiment classification, using a subset of an Amazon reviews dataset,
[mteb/amazon_polarity](https://huggingface.co/datasets/mteb/amazon_polarity), and ORPO fine-tuning.
The finetuned model achieves a moderate +10% improvement on sentiment classification
(as measured by SST2, which asks the model to classify sentences with a single word,
either 'positive' or 'negative'), without general performance being impacted
(as measured by hellaswag, which asks the model to complete a sentence with a sensible
response chosen from a list of choices).
| Metric Category | Metric | Base Model | Finetuned Model | Change |
|---------------------|--------------------|----------------|-----------------|--------|
| Sentiment | SST2/acc | 0.68 | 0.75 | +10% |
| | | | | |
| General Completions | hellaswag/acc | 0.447 | 0.459 | +3% |
| | hellaswag/acc_norm | 0.550 | 0.560 | +2% |
The training dataset was the first 10k samples from mteb/amazon_polarity, and the model was trained for
5 epochs. The dataset was nearly balanced across positive and negative sentiment -
~51% of examples were negative.
The finetuning examples used an SST-like prompt format (see Prompt Formats, below). An attempt was
also made to train using exactly the SST Eval format. Oddly, using the SST Eval format resulted in the
SST accuracy going down (0.54 for 10k samples and 1 epoch, -20% compared to the base model).
This was unexpected and probably bears further investigation.
The model was much worse at correctly identifying positive sentiment (57% accuracy) than it was at
identifying negative sentiment (93% accuracy) - see Confusion Matrix, below. This performance on
negative sentiment is good - State of the Art for SST2 overall is 97%
(achieved by [T5-11B](https://huggingface.co/google-t5/t5-11b)).
Since the training dataset was balanced across positive and negative examples, this mismatch seems likely
to have been present in the base model, although this was not confirmed. Next steps for improvement
should be to verify that the behavior is inherited and, if so, to train with a larger
set of positive examples.
## Confusion Matrix
<img src="confusion-matrix.png" width="500" height="500" />
## Prompt Formats
**SST Eval**: The SST Eval uses prompts like this:
> A complete waste of time. Typographical errors, poor grammar, and a totally pathetic plot add up to absolutely nothing.
> I'm embarrassed for this author and very disappointed I actually paid for this book.
>
> Question: Is this sentence positive or negative?
> Answer:
**SST-like**: Training examples were formulated using an SST-like prompt:
> Below is an instruction that describes a task. Write a response that appropriately completes the request.
>
> ###Instruction:
> Determine the sentiment of the input sentence. Please respond as positive or negative.
> ###Input:
> The best soundtrack ever to anything.
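For illustration, the SST-like prompt above could be sent to the finetuned model roughly as in the sketch below. This is not the author's evaluation harness; the `###Response:` marker, greedy decoding, and the string matching of the label are assumptions.
```python
# Hypothetical usage sketch: classify a sentence with the SST-like prompt.
# The "###Response:" marker and the label parsing are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "erichennings/EH-sentiment-finetuned-Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def classify(sentence: str) -> str:
    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "###Instruction:\n"
        "Determine the sentiment of the input sentence. "
        "Please respond as positive or negative.\n"
        "###Input:\n"
        f"{sentence}\n"
        "###Response:\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=4, do_sample=False)
    completion = tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return "positive" if "positive" in completion.lower() else "negative"

print(classify("The best soundtrack ever to anything."))  # expected: positive
```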
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Finetuned model for sentiment classification.
- **Developed by:** Eric Hennings
- **Finetuned from model [optional]:** meta-llama/Llama-3.2-1B-Instruct
### Model Sources [optional]
|
mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF | mradermacher | 2024-10-27T23:35:08Z | 86 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2",
"base_model:quantized:DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-27T19:58:32Z | ---
base_model: DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
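As a minimal illustration (not part of the original card), a single-file quant such as the i1-Q4_K_M listed below could be loaded locally with llama-cpp-python; the file name and generation settings here are placeholder assumptions.
```python
# Sketch: run a downloaded single-file imatrix quant with llama-cpp-python.
# The local file name and sampling settings are illustrative, not prescriptive.
from llama_cpp import Llama

llm = Llama(
    model_path="MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)
out = llm("Write the opening line of a gothic novel.", max_tokens=64)
print(out["choices"][0]["text"])
```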
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-IQ1_S.gguf) | i1-IQ1_S | 5.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-IQ1_M.gguf) | i1-IQ1_M | 5.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-IQ2_S.gguf) | i1-IQ2_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-IQ2_M.gguf) | i1-IQ2_M | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-Q2_K.gguf) | i1-Q2_K | 8.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-IQ3_S.gguf) | i1-IQ3_S | 10.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-IQ3_M.gguf) | i1-IQ3_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-Q4_0.gguf) | i1-Q4_0 | 13.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 13.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 15.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.3 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23B-V2.i1-Q6_K.gguf) | i1-Q6_K | 18.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
leekh7624/model4 | leekh7624 | 2024-10-27T23:29:51Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:leekh7624/model3",
"base_model:finetune:leekh7624/model3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T23:25:38Z | ---
base_model: leekh7624/model3
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** leekh7624
- **License:** apache-2.0
- **Finetuned from model :** leekh7624/model3
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DewiBrynJones/whisper-cv-cy-train-all-plus-other-with-excluded-ft-cv-tts | DewiBrynJones | 2024-10-27T23:28:42Z | 6 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"base_model:techiaith/whisper-large-v3-ft-cv-cy",
"base_model:finetune:techiaith/whisper-large-v3-ft-cv-cy",
"license:apache-2.0",
"region:us"
] | null | 2024-10-25T21:45:29Z | ---
license: apache-2.0
base_model: DewiBrynJones/whisper-large-v3-ft-cv-cy-train-all-plus-other-with-excluded
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-cv-cy-train-all-plus-other-with-excluded-ft-cv-tts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-cv-cy-train-all-plus-other-with-excluded-ft-cv-tts
This model is a fine-tuned version of [DewiBrynJones/whisper-large-v3-ft-cv-cy-train-all-plus-other-with-excluded](https://huggingface.co/DewiBrynJones/whisper-large-v3-ft-cv-cy-train-all-plus-other-with-excluded) on the DewiBrynJones/commonvoice_cy_tts train main dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2556
- Wer: 0.1934
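As a quick illustration (not part of the generated card), the checkpoint can be used for Welsh transcription through the ASR pipeline; the audio file name below is a placeholder.
```python
# Sketch: transcribe a local Welsh audio file with the fine-tuned Whisper model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DewiBrynJones/whisper-cv-cy-train-all-plus-other-with-excluded-ft-cv-tts",
)
print(asr("sample_cy.wav")["text"])  # path to a local audio file (placeholder)
```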
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.23 | 0.4583 | 1000 | 0.2574 | 0.1992 |
| 0.1775 | 0.9166 | 2000 | 0.2527 | 0.2015 |
| 0.0978 | 1.3749 | 3000 | 0.2559 | 0.1951 |
| 0.0902 | 1.8332 | 4000 | 0.2556 | 0.1934 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
lucaelin/llama-3.2-3b-instruct-fc-gguf | lucaelin | 2024-10-27T23:20:04Z | 35 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-12T17:28:15Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** lucaelin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nicolofelicioni/pythia-1b-sft-hh-hts-7 | nicolofelicioni | 2024-10-27T23:13:55Z | 131 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T23:10:11Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ntnxx2/vit-base-patch16-224-finetuned-Visual-Emotional | ntnxx2 | 2024-10-27T23:07:56Z | 22 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-11-26T07:05:21Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-Visual-Emotional
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.65
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-Visual-Emotional
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0819
- Accuracy: 0.65
## Model description
More information needed
## Intended uses & limitations
More information needed
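A minimal inference sketch, assuming the standard 🤗 Transformers image-classification pipeline; the repository id comes from this card and the image path is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
classifier = pipeline(
    "image-classification",
    model="ntnxx2/vit-base-patch16-224-finetuned-Visual-Emotional",
)

# The path is illustrative; any RGB image works
predictions = classifier("example.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```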
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.8696 | 5 | 2.1918 | 0.1125 |
| 2.1428 | 1.9130 | 11 | 2.1017 | 0.1625 |
| 2.1428 | 2.9565 | 17 | 1.9293 | 0.1875 |
| 1.8582 | 4.0 | 23 | 1.7163 | 0.325 |
| 1.8582 | 4.8696 | 28 | 1.5777 | 0.375 |
| 1.4818 | 5.9130 | 34 | 1.4303 | 0.45 |
| 1.1661 | 6.9565 | 40 | 1.3146 | 0.475 |
| 1.1661 | 8.0 | 46 | 1.2160 | 0.525 |
| 0.9421 | 8.8696 | 51 | 1.2096 | 0.55 |
| 0.9421 | 9.9130 | 57 | 1.1362 | 0.5875 |
| 0.8003 | 10.9565 | 63 | 1.1598 | 0.525 |
| 0.8003 | 12.0 | 69 | 1.0878 | 0.6 |
| 0.678 | 12.8696 | 74 | 1.0940 | 0.6375 |
| 0.5888 | 13.9130 | 80 | 1.0819 | 0.65 |
| 0.5888 | 14.9565 | 86 | 1.0700 | 0.625 |
| 0.5086 | 16.0 | 92 | 1.0758 | 0.625 |
| 0.5086 | 16.8696 | 97 | 1.0804 | 0.625 |
| 0.4454 | 17.9130 | 103 | 1.0704 | 0.6 |
| 0.4454 | 18.9565 | 109 | 1.1111 | 0.575 |
| 0.3758 | 20.0 | 115 | 1.0619 | 0.5875 |
| 0.3402 | 20.8696 | 120 | 1.0846 | 0.6125 |
| 0.3402 | 21.9130 | 126 | 1.1042 | 0.6125 |
| 0.3247 | 22.9565 | 132 | 1.0926 | 0.6375 |
| 0.3247 | 24.0 | 138 | 1.0908 | 0.625 |
| 0.3142 | 24.8696 | 143 | 1.0964 | 0.6 |
| 0.3142 | 25.9130 | 149 | 1.0999 | 0.6125 |
| 0.3081 | 26.9565 | 155 | 1.1036 | 0.625 |
| 0.276 | 27.8261 | 160 | 1.1019 | 0.625 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
freewheelye/mergekit-slerp-wmgydwq | freewheelye | 2024-10-27T23:05:47Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:ArliAI/Gemma-2-9B-ArliAI-RPMax-v1.1",
"base_model:merge:ArliAI/Gemma-2-9B-ArliAI-RPMax-v1.1",
"base_model:OpenLLM-Ro/RoGemma2-9b-Instruct",
"base_model:merge:OpenLLM-Ro/RoGemma2-9b-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T23:01:09Z | ---
base_model:
- ArliAI/Gemma-2-9B-ArliAI-RPMax-v1.1
- OpenLLM-Ro/RoGemma2-9b-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [ArliAI/Gemma-2-9B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Gemma-2-9B-ArliAI-RPMax-v1.1)
* [OpenLLM-Ro/RoGemma2-9b-Instruct](https://huggingface.co/OpenLLM-Ro/RoGemma2-9b-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: OpenLLM-Ro/RoGemma2-9b-Instruct
layer_range:
- 0
- 32
- model: ArliAI/Gemma-2-9B-ArliAI-RPMax-v1.1
layer_range:
- 0
- 32
merge_method: slerp
base_model: OpenLLM-Ro/RoGemma2-9b-Instruct
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: bfloat16
```
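The merged checkpoint can be loaded like any other Gemma 2 causal LM. A minimal sketch, assuming the standard 🤗 Transformers API (the `bfloat16` dtype matches the merge configuration above; the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "freewheelye/mergekit-slerp-wmgydwq"  # this repository
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

# Illustrative prompt (RoGemma2 is a Romanian-instruction model)
inputs = tokenizer("Salut! Ce mai faci?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```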
|
rshacter/ruthshacter-llama-3.2-1B-instruct-500-20-bnb | rshacter | 2024-10-27T22:49:33Z | 175 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T22:45:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hz3519/TransformerBeta_models | hz3519 | 2024-10-27T22:46:08Z | 0 | 1 | null | [
"tag1",
"tag2",
"en",
"dataset:dataset1",
"dataset:dataset2",
"license:mit",
"region:us"
] | null | 2023-05-18T09:31:51Z | ---
language:
- "en"
thumbnail: "https://example.com/path/to/your/thumbnail.jpg" # URL to a thumbnail used in social sharing
tags:
- "tag1" # For example, "sentiment-analysis"
- "tag2" # For example, "machine-translation"
license: "mit"
datasets:
- "dataset1" # For example, "imdb"
- "dataset2" # For example, "wmt16"
metrics:
- "metric1" # For example, "accuracy"
- "metric2" # For example, "f1"
---
# TransformerBeta
## License
This model is distributed under the MIT license.
|
louisbrulenaudet/lemone-router-m | louisbrulenaudet | 2024-10-27T22:43:53Z | 21 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"sentence-transformers",
"feature-extraction",
"legal",
"taxation",
"fiscalité",
"tax",
"fr",
"dataset:louisbrulenaudet/code-impots",
"dataset:louisbrulenaudet/code-impots-annexe-iv",
"dataset:louisbrulenaudet/code-impots-annexe-iii",
"dataset:louisbrulenaudet/code-impots-annexe-i",
"dataset:louisbrulenaudet/code-impots-annexe-ii",
"dataset:louisbrulenaudet/livre-procedures-fiscales",
"dataset:louisbrulenaudet/bofip",
"base_model:intfloat/multilingual-e5-base",
"base_model:finetune:intfloat/multilingual-e5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-21T20:02:01Z | ---
library_name: transformers
license: apache-2.0
base_model: intfloat/multilingual-e5-base
tags:
- generated_from_trainer
- sentence-transformers
- text-classification
- feature-extraction
- generated_from_trainer
- legal
- taxation
- fiscalité
- tax
metrics:
- accuracy
model-index:
- name: lemone-router
results: []
language:
- fr
pipeline_tag: text-classification
datasets:
- louisbrulenaudet/code-impots
- louisbrulenaudet/code-impots-annexe-iv
- louisbrulenaudet/code-impots-annexe-iii
- louisbrulenaudet/code-impots-annexe-i
- louisbrulenaudet/code-impots-annexe-ii
- louisbrulenaudet/livre-procedures-fiscales
- louisbrulenaudet/bofip
widget:
- text: "Quelles sont les modalités d'adoption d'un plan d'apurement échelonné par la commission chargée du recouvrement, et quelles sont les conditions qui s'imposent aux administrations et organismes chargés du recouvrement ainsi qu'au débiteur qui s'engage à le respecter ?"
example_title: "Contrôle et contentieux"
- text: "Quel régime fiscal est applicable aux opérations de crédit-bail portant sur des fonds de commerce, des fonds artisanaux, ou l'un de leurs éléments incorporels non amortissables, et quelles sont les conditions dans lesquelles les sommes correspondant à la quote-part de loyer ne constituent pas un élément du bénéfice imposable du bailleur et ne sont pas déductibles pour la détermination des résultats imposables du locataire ?"
example_title: "Bénéfices professionnels"
- text: "La succession s'ouvre par le décès dude cujus(code civil, art. 720). C'est donc le décès qui constitue le fait générateur de l'impôt. Dès lors, le tarif du droit et les règles applicables à sa liquidation sont celles en vigueur au jour du décès (en ce sens, Cass. com 7 janvier 1997 n° de pourvoi 95-11686). Toutefois, pour les legs sous condition suspensive (BOI-ENR-DMTG-10-10-10-10), les droits sont dus lors de la réalisation de la condition, d'après le régime fiscal applicable et la valeur des biens à cette époque (code général des impôts (CGI), art 676). Par ailleurs, pour les pénalités éventuellement exigibles, la loi applicable est celle en vigueur lors de la contravention. L'administration prouve le décès, en vue de la réclamation des droits, au moyen des registres de l'état civil dont les maires sont tenus de lui remettre un relevé trimestriel (LPF, art. L. 102 A). Elle peut aussi prouver la mutation par décès au moyen des présomptions légales de l'article 1881 du CGI et de l'article 1882 du CGI. Dans ce cas le fait générateur se place à la date à partir de laquelle la prise de possession est établie."
example_title: "Patrimoine et enregistrement"
- text: "Quelles sont les obligations déclaratives que les associés personnes physiques doivent respecter pour bénéficier de la réduction d'impôt accordée au titre des dépenses de restauration immobilière effectuées dans les sites patrimoniaux remarquables et les quartiers relevant de la politique de la ville, et quelles sont les pièces justificatives qui doivent être jointes à leur déclaration des revenus ?"
example_title: "Revenus particuliers"
---
<img src="assets/thumbnail.webp">
# Lemone-Router: A Series of Fine-Tuned Classification Models for French Taxation
Lemone-router is a series of classification models designed to produce an optimal multi-agent system for different branches of tax law. Trained on a base of 49k lines comprising synthetic questions generated by GPT-4 Turbo and Llama 3.1 70B, further refined through evol-instruction tuning and manual curation, together with authority documents, these models are based on an 8-category decomposition of the classification scheme derived from the Bulletin officiel des finances publiques - impôts:
```python
label2id = {
"Bénéfices professionnels": 0,
"Contrôle et contentieux": 1,
"Dispositifs transversaux": 2,
"Fiscalité des entreprises": 3,
"Patrimoine et enregistrement": 4,
"Revenus particuliers": 5,
"Revenus patrimoniaux": 6,
"Taxes sur la consommation": 7
}
id2label = {
0: "Bénéfices professionnels",
1: "Contrôle et contentieux",
2: "Dispositifs transversaux",
3: "Fiscalité des entreprises",
4: "Patrimoine et enregistrement",
5: "Revenus particuliers",
6: "Revenus patrimoniaux",
7: "Taxes sur la consommation"
}
```
This model is a fine-tuned version of [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base).
It achieves the following results on the evaluation set of 5000 texts:
- Loss: 0.4096
- Accuracy: 0.9265
### Usage
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("louisbrulenaudet/lemone-router-m")
model = AutoModelForSequenceClassification.from_pretrained("louisbrulenaudet/lemone-router-m")
```
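A short inference sketch, continuing from the snippet above and assuming PyTorch; it routes a single question to one of the eight categories (the example question is adapted from the widget examples, and the config is assumed to carry the id2label mapping shown earlier):

```python
import torch

# Continuing from the loading snippet above (tokenizer and model already created)
question = (
    "Quel régime fiscal est applicable aux opérations de crédit-bail "
    "portant sur des fonds de commerce ?"
)
inputs = tokenizer(question, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1).item())
# The config is assumed to carry the id2label mapping listed earlier in this card
print(model.config.id2label[predicted_id])
```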
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.099463734610582e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 23
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5371 | 1.0 | 2809 | 0.4147 | 0.8680 |
| 0.3154 | 2.0 | 5618 | 0.3470 | 0.8914 |
| 0.2241 | 3.0 | 8427 | 0.3345 | 0.9147 |
| 0.1273 | 4.0 | 11236 | 0.3788 | 0.9187 |
| 0.0525 | 5.0 | 14045 | 0.4096 | 0.9265 |
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA H100 NVL
- **CPU Model**: AMD EPYC 9V84 96-Core Processor
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.20.1
## Citation
If you use this code in your research, please use the following BibTeX entry.
```BibTeX
@misc{louisbrulenaudet2024,
author = {Louis Brulé Naudet},
title = {Lemone-Router: A Series of Fine-Tuned Classification Models for French Taxation},
year = {2024},
howpublished = {\url{https://huggingface.co/datasets/louisbrulenaudet/lemone-router-m}},
}
```
## Feedback
If you have any feedback, please reach out at [[email protected]](mailto:[email protected]). |
louisbrulenaudet/lemone-router-l | louisbrulenaudet | 2024-10-27T22:43:07Z | 2,570 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"sentence-transformers",
"feature-extraction",
"legal",
"taxation",
"fiscalité",
"tax",
"fr",
"dataset:louisbrulenaudet/code-impots",
"dataset:louisbrulenaudet/code-impots-annexe-iv",
"dataset:louisbrulenaudet/code-impots-annexe-iii",
"dataset:louisbrulenaudet/code-impots-annexe-i",
"dataset:louisbrulenaudet/code-impots-annexe-ii",
"dataset:louisbrulenaudet/livre-procedures-fiscales",
"dataset:louisbrulenaudet/bofip",
"base_model:intfloat/multilingual-e5-base",
"base_model:finetune:intfloat/multilingual-e5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-23T01:47:00Z | ---
library_name: transformers
license: apache-2.0
base_model: intfloat/multilingual-e5-base
tags:
- generated_from_trainer
- sentence-transformers
- text-classification
- feature-extraction
- generated_from_trainer
- legal
- taxation
- fiscalité
- tax
metrics:
- accuracy
model-index:
- name: lemone-router
results: []
language:
- fr
pipeline_tag: text-classification
datasets:
- louisbrulenaudet/code-impots
- louisbrulenaudet/code-impots-annexe-iv
- louisbrulenaudet/code-impots-annexe-iii
- louisbrulenaudet/code-impots-annexe-i
- louisbrulenaudet/code-impots-annexe-ii
- louisbrulenaudet/livre-procedures-fiscales
- louisbrulenaudet/bofip
widget:
- text: "Quelles sont les modalités d'adoption d'un plan d'apurement échelonné par la commission chargée du recouvrement, et quelles sont les conditions qui s'imposent aux administrations et organismes chargés du recouvrement ainsi qu'au débiteur qui s'engage à le respecter ?"
example_title: "Contrôle et contentieux"
- text: "Quel régime fiscal est applicable aux opérations de crédit-bail portant sur des fonds de commerce, des fonds artisanaux, ou l'un de leurs éléments incorporels non amortissables, et quelles sont les conditions dans lesquelles les sommes correspondant à la quote-part de loyer ne constituent pas un élément du bénéfice imposable du bailleur et ne sont pas déductibles pour la détermination des résultats imposables du locataire ?"
example_title: "Bénéfices professionnels"
- text: "La succession s'ouvre par le décès dude cujus(code civil, art. 720). C'est donc le décès qui constitue le fait générateur de l'impôt. Dès lors, le tarif du droit et les règles applicables à sa liquidation sont celles en vigueur au jour du décès (en ce sens, Cass. com 7 janvier 1997 n° de pourvoi 95-11686). Toutefois, pour les legs sous condition suspensive (BOI-ENR-DMTG-10-10-10-10), les droits sont dus lors de la réalisation de la condition, d'après le régime fiscal applicable et la valeur des biens à cette époque (code général des impôts (CGI), art 676). Par ailleurs, pour les pénalités éventuellement exigibles, la loi applicable est celle en vigueur lors de la contravention. L'administration prouve le décès, en vue de la réclamation des droits, au moyen des registres de l'état civil dont les maires sont tenus de lui remettre un relevé trimestriel (LPF, art. L. 102 A). Elle peut aussi prouver la mutation par décès au moyen des présomptions légales de l'article 1881 du CGI et de l'article 1882 du CGI. Dans ce cas le fait générateur se place à la date à partir de laquelle la prise de possession est établie."
example_title: "Patrimoine et enregistrement"
- text: "Quelles sont les obligations déclaratives que les associés personnes physiques doivent respecter pour bénéficier de la réduction d'impôt accordée au titre des dépenses de restauration immobilière effectuées dans les sites patrimoniaux remarquables et les quartiers relevant de la politique de la ville, et quelles sont les pièces justificatives qui doivent être jointes à leur déclaration des revenus ?"
example_title: "Revenus particuliers"
---
<img src="assets/thumbnail.webp">
# Lemone-Router: A Series of Fine-Tuned Classification Models for French Taxation
Lemone-router is a series of classification models designed to produce an optimal multi-agent system for different branches of tax law. Trained on a base of 49k lines comprising synthetic questions generated by GPT-4 Turbo and Llama 3.1 70B, further refined through evol-instruction tuning and manual curation, together with authority documents, these models are based on an 8-category decomposition of the classification scheme derived from the Bulletin officiel des finances publiques - impôts:
```python
label2id = {
"Bénéfices professionnels": 0,
"Contrôle et contentieux": 1,
"Dispositifs transversaux": 2,
"Fiscalité des entreprises": 3,
"Patrimoine et enregistrement": 4,
"Revenus particuliers": 5,
"Revenus patrimoniaux": 6,
"Taxes sur la consommation": 7
}
id2label = {
0: "Bénéfices professionnels",
1: "Contrôle et contentieux",
2: "Dispositifs transversaux",
3: "Fiscalité des entreprises",
4: "Patrimoine et enregistrement",
5: "Revenus particuliers",
6: "Revenus patrimoniaux",
7: "Taxes sur la consommation"
}
```
This model is a fine-tuned version of [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large).
It achieves the following results on the evaluation set:
- Loss: 0.4734
- Accuracy: 0.9191
### Usage
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("louisbrulenaudet/lemone-router-l")
model = AutoModelForSequenceClassification.from_pretrained("louisbrulenaudet/lemone-router-l")
```
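A batched routing sketch, continuing from the snippet above and assuming PyTorch; the questions are shortened versions of the widget examples, and the config is assumed to carry the id2label mapping shown earlier:

```python
import torch

# Continuing from the loading snippet above: route several questions at once
questions = [
    "Quelles sont les modalités d'adoption d'un plan d'apurement échelonné ?",
    "Quel régime fiscal est applicable aux opérations de crédit-bail ?",
]
batch = tokenizer(questions, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    probs = model(**batch).logits.softmax(dim=-1)

for question, p in zip(questions, probs):
    idx = int(p.argmax().item())
    # The config is assumed to carry the id2label mapping listed earlier
    print(f"{model.config.id2label[idx]} ({p[idx]:.2f}) <- {question}")
```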
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.6763799752474963e-05
- train_batch_size: 4
- eval_batch_size: 64
- seed: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6402 | 1.0 | 11233 | 0.6569 | 0.8630 |
| 0.5031 | 2.0 | 22466 | 0.5058 | 0.9025 |
| 0.2196 | 3.0 | 33699 | 0.4734 | 0.9191 |
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA H100 NVL
- **CPU Model**: AMD EPYC 9V84 96-Core Processor
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.20.1
## Citation
If you use this code in your research, please use the following BibTeX entry.
```BibTeX
@misc{louisbrulenaudet2024,
author = {Louis Brulé Naudet},
title = {Lemone-Router: A Series of Fine-Tuned Classification Models for French Taxation},
year = {2024},
howpublished = {\url{https://huggingface.co/datasets/louisbrulenaudet/lemone-router-l}},
}
```
## Feedback
If you have any feedback, please reach out at [[email protected]](mailto:[email protected]).
|
RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf | RichardErkhov | 2024-10-27T22:41:15Z | 94 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T20:01:51Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-SUN-2.5B-chat - GGUF
- Model creator: https://huggingface.co/meditsolutions/
- Original model: https://huggingface.co/meditsolutions/Llama-3.2-SUN-2.5B-chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.2-SUN-2.5B-chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q2_K.gguf) | Q2_K | 0.95GB |
| [Llama-3.2-SUN-2.5B-chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q3_K_S.gguf) | Q3_K_S | 1.09GB |
| [Llama-3.2-SUN-2.5B-chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q3_K.gguf) | Q3_K | 1.18GB |
| [Llama-3.2-SUN-2.5B-chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q3_K_M.gguf) | Q3_K_M | 1.18GB |
| [Llama-3.2-SUN-2.5B-chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q3_K_L.gguf) | Q3_K_L | 1.26GB |
| [Llama-3.2-SUN-2.5B-chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.IQ4_XS.gguf) | IQ4_XS | 1.32GB |
| [Llama-3.2-SUN-2.5B-chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q4_0.gguf) | Q4_0 | 1.37GB |
| [Llama-3.2-SUN-2.5B-chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.IQ4_NL.gguf) | IQ4_NL | 1.38GB |
| [Llama-3.2-SUN-2.5B-chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q4_K_S.gguf) | Q4_K_S | 1.37GB |
| [Llama-3.2-SUN-2.5B-chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q4_K.gguf) | Q4_K | 1.43GB |
| [Llama-3.2-SUN-2.5B-chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q4_K_M.gguf) | Q4_K_M | 1.43GB |
| [Llama-3.2-SUN-2.5B-chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q4_1.gguf) | Q4_1 | 1.49GB |
| [Llama-3.2-SUN-2.5B-chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q5_0.gguf) | Q5_0 | 1.62GB |
| [Llama-3.2-SUN-2.5B-chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q5_K_S.gguf) | Q5_K_S | 1.62GB |
| [Llama-3.2-SUN-2.5B-chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q5_K.gguf) | Q5_K | 1.66GB |
| [Llama-3.2-SUN-2.5B-chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q5_K_M.gguf) | Q5_K_M | 1.66GB |
| [Llama-3.2-SUN-2.5B-chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q5_1.gguf) | Q5_1 | 1.75GB |
| [Llama-3.2-SUN-2.5B-chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q6_K.gguf) | Q6_K | 1.9GB |
| [Llama-3.2-SUN-2.5B-chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf/blob/main/Llama-3.2-SUN-2.5B-chat.Q8_0.gguf) | Q8_0 | 2.45GB |
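One way to try these files locally is through the llama-cpp-python bindings. A minimal sketch, assuming `llama-cpp-python` and `huggingface_hub` are installed; any quantization from the table can be substituted for the filename:

```python
from llama_cpp import Llama

# Downloads the chosen GGUF from this repository and loads it
# (the filename is one of the table entries above)
llm = Llama.from_pretrained(
    repo_id="RichardErkhov/meditsolutions_-_Llama-3.2-SUN-2.5B-chat-gguf",
    filename="Llama-3.2-SUN-2.5B-chat.Q4_K_M.gguf",
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```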
Original model description:
---
language:
- en
license: llama3.2
library_name: transformers
base_model:
- meta-llama/Llama-3.2-1B-Instruct
datasets:
- argilla/OpenHermesPreferences
- argilla/magpie-ultra-v0.1
- argilla/Capybara-Preferences-Filtered
- mlabonne/open-perfectblend
- HuggingFaceTB/everyday-conversations-llama3.1-2k
- WizardLMTeam/WizardLM_evol_instruct_V2_196k
- ProlificAI/social-reasoning-rlhf
pipeline_tag: text-generation
---
# MedIT SUN 2.5B
<div align="center">
<img src="https://i.ibb.co/PF0TdMJ/imagine-image-9a56cee7-0f4f-4cc2-b265-a5b8d04f266b.png" alt="Llama-3.2-MedIT-SUN-2.5B" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;">
</div>
**Base Model**
- Llama 3.2 1B
**Extended Size**
- 1B to 2.5B parameters
**Extension Method**
- Proprietary technique developed by MedIT Solutions
**Fine-tuning**
- Open (or open subsets allowing for commercial use) open datasets from HF
- Open (or open subsets allowing for commercial use) SFT datasets from HF
**Training Status**
- Current version: chat-1.0.0
**Key Features**
- Built on Llama 3.2 architecture
- Expanded from 1B to 2.47B parameters
- Optimized for open-ended conversations
- Incorporates supervised fine-tuning for improved performance
**Use Case**
- General conversation and task-oriented interactions
**Limitations**
As the model is still in training, performance and capabilities may vary. Users should be aware that the model is not in its final form and may exhibit inconsistencies or limitations typical of in-progress AI models.
**Disclaimer and Safety Considerations**
The Model is designed to be used as a smart assistant but not as a knowledge source within your applications, systems, or environments. It is not intended to provide 100% accurate answers, especially in scenarios where high precision and accuracy are required.
|
Vikhrmodels/Vikhr-Qwen-2.5-0.5b-Instruct | Vikhrmodels | 2024-10-27T22:39:42Z | 2,051 | 14 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"ru",
"en",
"dataset:Vikhrmodels/GrandMaster-PRO-MAX",
"arxiv:2405.13929",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-05T16:08:13Z | ---
library_name: transformers
model_name: Vikhr-Qwen-2.5-0.5b-Instruct
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
language:
- ru
- en
license: apache-2.0
datasets:
- Vikhrmodels/GrandMaster-PRO-MAX
---
# 💨📟 Vikhr-Qwen-2.5-0.5B-Instruct
#### RU
Инструктивная модель на основе **Qwen-2.5-0.5B-Instruct**, обученная на русскоязычном датасете **GrandMaster-PRO-MAX**. В **4 раза эффективнее** базовой модели, и идеально подходит для запуска на слабых мобильных устройствах.
#### EN
Instructive model based on **Qwen-2.5-0.5B-Instruct**, trained on the Russian-language dataset **GrandMaster-PRO-MAX**. It is **4 times more efficient** than the base model, making it perfect for deployment on low-end mobile devices.
## GGUF
- [Vikhrmodels/Vikhr-Qwen-2.5-0.5B-instruct-GGUF](https://huggingface.co/Vikhrmodels/Vikhr-Qwen-2.5-0.5B-instruct-GGUF)
## Особенности / Features:
- 📚 Основа / Base: [Qwen-2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct)
- 🇷🇺 Специализация / Specialization: **RU**
- 💾 Датасет / Dataset: [GrandMaster-PRO-MAX](https://huggingface.co/datasets/Vikhrmodels/GrandMaster-PRO-MAX)
## Попробовать / Try now:
[](https://colab.research.google.com/drive/1bJpLmplDGkMbfOLO2CH6IO-2uUZEaknf?usp=sharing)
## Описание / Description:
#### RU
**Vikhr-Qwen-2.5-0.5B-instruct** — это компактная языковая модель, обученная на датасете **GrandMaster-PRO-MAX**, специально доученная для обработки русского языка. Эффективность модели **в 4 раза** превышает базовую модель, а её размер составляет **1ГБ** , что делает её отличным выбором для запуска на слабых мобильных устройствах.
#### EN
**Vikhr-Qwen-2.5-0.5B-instruct** is a compact language model trained on the **GrandMaster-PRO-MAX** dataset, specifically designed for processing the Russian language. Its efficiency is **4 times** higher than the base model, and its size is **1GB**, making it an excellent choice for deployment on low-end mobile devices.
## Обучение / Train:
#### RU
Для создания **Vikhr-Qwen-2.5-0.5B-Instruct** использовался метод SFT (Supervised Fine-Tuning). Мы обучили модель на синтетическом датасете **Vikhrmodels/GrandMaster-PRO-MAX** (150k инструкций) с поддержкой CoT (Chain-Of-Thought), используя промпты для GPT-4-turbo.
#### EN
To create **Vikhr-Qwen-2.5-0.5B-Instruct**, the SFT (Supervised Fine-Tuning) method was used. We trained the model on a synthetic dataset **Vikhrmodels/GrandMaster-PRO-MAX** (150k instructions) with support for CoT (Chain-Of-Thought), utilizing prompts for GPT-4-turbo.
## Пример кода для запуска / Sample code to run:
**Рекомендуемая температура для генерации: 0.3** / **Recommended generation temperature: 0.3**.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Загрузка модели и токенизатора
model_name = "Vikhrmodels/Vikhr-Qwen-2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Подготовка входного текста
input_text = "Напиши очень краткую рецензию о книге Гарри Поттер."
messages = [
{"role": "system", "content": "Вы - Vikhr, помощник с искусственным интеллектом, созданный компанией Vikhr models, чтобы быть полезным, безобидным и честным."},
{"role": "user", "content": input_text},
]
# Токенизация и генерация текста
input_ids = tokenizer.apply_chat_template(messages, truncation=True, add_generation_prompt=True, return_tensors="pt")
output = model.generate(
input_ids,
max_length=1512,
temperature=0.3,
num_return_sequences=1,
no_repeat_ngram_size=2,
top_k=50,
top_p=0.95,
)
# Декодирование и вывод результата
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
#### Ответ модели / Model response:
>Книга "Гарри Поттер" – это серия книг, написанных британским писателем Джоан Роулинг. Это одно из самых известных произведений в мире литературы и популярного детского творчества.
>
>**Основные черты серии:**
>
>1. **Сюжет:** События разворачиваются вокруг мальчика по имени Гарри Поттер, который учится в Школе волшебства и философии в Университете Хогвартс. Он сталкивается с различными препятствиями, включая борьбу со злом, поиск друзей и самопознание.
>
>2. **Персонажи:** В книге представлены множество персонажей, каждый из которых имеет свои уникальные черты характера, мотивации и прошлое. Главный герой, Гарри Поттер, является примером доброго и смелого человека, а также необычной личностью.
>
>3. **Темы и идеи:** Рассказы книги затрагивают темы любви, дружбы, справедливости, морали, человеческой неповиновенности и важности обучения через приключения.
>
>4. **История и развитие персонажей:** Через события и взаимодействие с другими персонажами книга исследует глубокие психологические и философские вопросы.
>
>5. **Влияние на культуру:** "Гарри Поттер" оказал огромное влияние на мировую литературу, превратившись в культовый жанр и символ знаний и мудрости.
>
>6. **Доступность:** Книги серии доступны для широкой аудитории и пользуются большим спросом, что делает их популярным выбором среди читателей всех возрастов.
>
>7. **Развитие жанра:** Несмотря на то что "Гарри Поттер" является частью серии, он продолжает быть любимым и актуальным, так как продолжает удивлять читателей новыми историями и персонажами.
>
>Эта серия книг остается одной из самых значительных и влиятельных в истории литературы, оказав влияние на развитие мировой культуры и образование.
### Авторы / Authors
- Sergei Bratchikov, [NLP Wanderer](https://t.me/nlpwanderer), [Vikhr Team](https://t.me/vikhrlabs)
- Nikolay Kompanets, [LakoMoor](https://t.me/lakomoor), [Vikhr Team](https://t.me/vikhrlabs)
- Konstantin Korolev, [Vikhr Team](https://t.me/vikhrlabs)
- Aleksandr Nikolich, [Vikhr Team](https://t.me/vikhrlabs)
```
@article{nikolich2024vikhr,
title={Vikhr: The Family of Open-Source Instruction-Tuned Large Language Models for Russian},
author={Aleksandr Nikolich and Konstantin Korolev and Sergey Bratchikov and Nikolay Kompanets and Artem Shelmanov},
journal={arXiv preprint arXiv:2405.13929},
year={2024},
url={https://arxiv.org/pdf/2405.13929}
}
``` |
RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf | RichardErkhov | 2024-10-27T22:37:26Z | 10 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-10-27T20:28:36Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi-2-2.7B-Instruct-Medical-Conversational - GGUF
- Model creator: https://huggingface.co/MiniMedMind/
- Original model: https://huggingface.co/MiniMedMind/Phi-2-2.7B-Instruct-Medical-Conversational/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q2_K.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q2_K.gguf) | Q2_K | 1.03GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q3_K_S.gguf) | Q3_K_S | 1.16GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q3_K.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q3_K.gguf) | Q3_K | 1.33GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q3_K_M.gguf) | Q3_K_M | 1.33GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q3_K_L.gguf) | Q3_K_L | 1.47GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.IQ4_XS.gguf) | IQ4_XS | 1.43GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q4_0.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q4_0.gguf) | Q4_0 | 1.49GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.IQ4_NL.gguf) | IQ4_NL | 1.5GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q4_K_S.gguf) | Q4_K_S | 1.51GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q4_K.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q4_K.gguf) | Q4_K | 1.62GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q4_K_M.gguf) | Q4_K_M | 1.62GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q4_1.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q4_1.gguf) | Q4_1 | 1.65GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q5_0.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q5_0.gguf) | Q5_0 | 1.8GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q5_K_S.gguf) | Q5_K_S | 1.8GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q5_K.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q5_K.gguf) | Q5_K | 1.87GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q5_K_M.gguf) | Q5_K_M | 1.87GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q5_1.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q5_1.gguf) | Q5_1 | 1.95GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q6_K.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q6_K.gguf) | Q6_K | 2.13GB |
| [Phi-2-2.7B-Instruct-Medical-Conversational.Q8_0.gguf](https://huggingface.co/RichardErkhov/MiniMedMind_-_Phi-2-2.7B-Instruct-Medical-Conversational-gguf/blob/main/Phi-2-2.7B-Instruct-Medical-Conversational.Q8_0.gguf) | Q8_0 | 2.75GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf | RichardErkhov | 2024-10-27T22:32:43Z | 25 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T20:28:41Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Hercules-Mini-1.8B - GGUF
- Model creator: https://huggingface.co/M4-ai/
- Original model: https://huggingface.co/M4-ai/Hercules-Mini-1.8B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Hercules-Mini-1.8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q2_K.gguf) | Q2_K | 0.79GB |
| [Hercules-Mini-1.8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q3_K_S.gguf) | Q3_K_S | 0.89GB |
| [Hercules-Mini-1.8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q3_K.gguf) | Q3_K | 0.95GB |
| [Hercules-Mini-1.8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q3_K_M.gguf) | Q3_K_M | 0.95GB |
| [Hercules-Mini-1.8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q3_K_L.gguf) | Q3_K_L | 0.98GB |
| [Hercules-Mini-1.8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.IQ4_XS.gguf) | IQ4_XS | 1.01GB |
| [Hercules-Mini-1.8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q4_0.gguf) | Q4_0 | 1.04GB |
| [Hercules-Mini-1.8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.IQ4_NL.gguf) | IQ4_NL | 1.05GB |
| [Hercules-Mini-1.8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q4_K_S.gguf) | Q4_K_S | 1.08GB |
| [Hercules-Mini-1.8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q4_K.gguf) | Q4_K | 1.13GB |
| [Hercules-Mini-1.8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q4_K_M.gguf) | Q4_K_M | 1.13GB |
| [Hercules-Mini-1.8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q4_1.gguf) | Q4_1 | 1.13GB |
| [Hercules-Mini-1.8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q5_0.gguf) | Q5_0 | 1.22GB |
| [Hercules-Mini-1.8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q5_K_S.gguf) | Q5_K_S | 1.24GB |
| [Hercules-Mini-1.8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q5_K.gguf) | Q5_K | 1.28GB |
| [Hercules-Mini-1.8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q5_K_M.gguf) | Q5_K_M | 1.28GB |
| [Hercules-Mini-1.8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q5_1.gguf) | Q5_1 | 1.31GB |
| [Hercules-Mini-1.8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q6_K.gguf) | Q6_K | 1.47GB |
| [Hercules-Mini-1.8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_Hercules-Mini-1.8B-gguf/blob/main/Hercules-Mini-1.8B.Q8_0.gguf) | Q8_0 | 1.82GB |
Original model description:
---
library_name: transformers
license: other
datasets:
- Locutusque/hercules-v4.0
language:
- en
inference:
parameters:
do_sample: true
temperature: 1
top_p: 0.7
top_k: 4
max_new_tokens: 250
repetition_penalty: 1.1
---
# Hercules-Mini-1.8B
<!-- Provide a quick summary of what the model is/does. -->
We fine-tuned Qwen1.5-1.8B on Locutusque's Hercules-v4.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model has capabilities in math, coding, function calling, roleplay, and more. We fine-tuned it using 700,000 examples of Hercules-v4.
- **Developed by:** M4-ai
- **Language(s) (NLP):** English and maybe Chinese
- **License:** tongyi-qianwen license
- **Finetuned from model:** [Qwen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
General purpose assistant, question answering, chain-of-thought, etc..
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The eos token was not set up properly, so to prevent infinite generation you'll need to implement a stopping criterion for when the model generates the <|im_end|> token.
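One simple way to do this, sketched below under the assumption that the Qwen1.5 tokenizer exposes `<|im_end|>` and that a ChatML-style prompt is appropriate, is to pass that token's id as `eos_token_id` to `generate`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "M4-ai/Hercules-Mini-1.8B"  # original (non-quantized) checkpoint
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# ChatML-style prompt, consistent with the <|im_end|> terminator mentioned above
prompt = "<|im_start|>user\nExplain what a hash map is.<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt")

# Treat <|im_end|> as the end-of-sequence token so generation stops cleanly
im_end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")
output = model.generate(**inputs, max_new_tokens=250, eos_token_id=im_end_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```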
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## Evaluation
Coming soon
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Locutusque/hercules-v4.0
#### Training Hyperparameters
- **Training regime:** bf16 non-mixed precision
## Technical Specifications
#### Hardware
We used 8 Kaggle TPUs, and we trained at a global batch size of 256 and sequence length of 1536
## Contributions
Thanks to @Tonic, @aloobun, @fhai50032, and @Locutusque for their contributions to this model.
|
drewwas/OpenMachine_FlashNorm | drewwas | 2024-10-27T22:30:17Z | 6 | 0 | null | [
"safetensors",
"llama",
"en",
"arxiv:2407.09577",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:mit",
"region:us"
] | null | 2024-10-18T06:59:29Z | ---
license: mit
language:
- en
base_model:
- meta-llama/Llama-3.2-1B
---
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Finetune of the Llama 3.2 1B model to include FlashNorm (https://arxiv.org/abs/2407.09577).
- **Developed by:** OpenMachine Labs
- **License:** MIT
- **Finetuned from model** Meta LLaMa 3.2 1B
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/meta-llama/llama-models/tree/main/models/llama3_2
- **Paper** https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## How to Get Started with the Model
Use the code below to get started with the model.
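A minimal sketch, assuming the checkpoint loads through the standard 🤗 Transformers causal-LM API (its `llama`-type configuration suggests it does); the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "drewwas/OpenMachine_FlashNorm"  # this repository
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Illustrative prompt
inputs = tokenizer("FlashNorm removes", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```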
#### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
## Model Card Authors
Nils Graef ([email protected])
Drew Wasielewski ([email protected])
|
ahmedheakl/asm2asm_bart-large_base_O0_702k_2ep | ahmedheakl | 2024-10-27T22:29:14Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-27T22:27:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
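Since this card is still a template, the following is only a minimal sketch assuming the standard `transformers` seq2seq API for a BART checkpoint; the actual input format expected by this assembly-translation model is not documented here:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ahmedheakl/asm2asm_bart-large_base_O0_702k_2ep"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("mov eax, 1", return_tensors="pt")  # illustrative input only
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```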
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf | RichardErkhov | 2024-10-27T22:27:28Z | 21 | 0 | null | [
"gguf",
"arxiv:2404.03608",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T20:28:41Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Sailor-1.8B-Chat - GGUF
- Model creator: https://huggingface.co/sail/
- Original model: https://huggingface.co/sail/Sailor-1.8B-Chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Sailor-1.8B-Chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q2_K.gguf) | Q2_K | 0.79GB |
| [Sailor-1.8B-Chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q3_K_S.gguf) | Q3_K_S | 0.89GB |
| [Sailor-1.8B-Chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q3_K.gguf) | Q3_K | 0.95GB |
| [Sailor-1.8B-Chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q3_K_M.gguf) | Q3_K_M | 0.95GB |
| [Sailor-1.8B-Chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q3_K_L.gguf) | Q3_K_L | 0.98GB |
| [Sailor-1.8B-Chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.IQ4_XS.gguf) | IQ4_XS | 1.01GB |
| [Sailor-1.8B-Chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q4_0.gguf) | Q4_0 | 1.04GB |
| [Sailor-1.8B-Chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.IQ4_NL.gguf) | IQ4_NL | 1.05GB |
| [Sailor-1.8B-Chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q4_K_S.gguf) | Q4_K_S | 1.08GB |
| [Sailor-1.8B-Chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q4_K.gguf) | Q4_K | 1.13GB |
| [Sailor-1.8B-Chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q4_K_M.gguf) | Q4_K_M | 1.13GB |
| [Sailor-1.8B-Chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q4_1.gguf) | Q4_1 | 1.13GB |
| [Sailor-1.8B-Chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q5_0.gguf) | Q5_0 | 1.22GB |
| [Sailor-1.8B-Chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q5_K_S.gguf) | Q5_K_S | 1.24GB |
| [Sailor-1.8B-Chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q5_K.gguf) | Q5_K | 1.28GB |
| [Sailor-1.8B-Chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q5_K_M.gguf) | Q5_K_M | 1.28GB |
| [Sailor-1.8B-Chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q5_1.gguf) | Q5_1 | 1.31GB |
| [Sailor-1.8B-Chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q6_K.gguf) | Q6_K | 1.47GB |
| [Sailor-1.8B-Chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/sail_-_Sailor-1.8B-Chat-gguf/blob/main/Sailor-1.8B-Chat.Q8_0.gguf) | Q8_0 | 1.82GB |
Original model description:
---
language:
- en
- zh
- id
- th
- vi
- ms
- lo
datasets:
- CohereForAI/aya_dataset
- CohereForAI/aya_collection
- Open-Orca/OpenOrca
tags:
- multilingual
- sea
- sailor
- sft
- chat
- instruction
widget:
- text: "如何制作烤鱼?"
example_title: "Chinese"
- text: "How to bake fish?"
example_title: "English"
- text: "Bagaimana cara memanggang ikan?"
example_title: "Malay"
- text: "วิธีย่างปลา?"
example_title: "Thai"
- text: "Bagaimana membuat bakaran ikan?"
example_title: "Indonesian"
- text: "Làm thế nào để nướng cá?"
example_title: "Vietnamese"
license: apache-2.0
base_model: sail/Sailor-1.8B
inference: false
---
<div align="center">
<img src="banner_sailor.jpg" width="700"/>
</div>
Sailor is a suite of Open Language Models tailored for South-East Asia (SEA), focusing on languages such as 🇮🇩Indonesian, 🇹🇭Thai, 🇻🇳Vietnamese, 🇲🇾Malay, and 🇱🇦Lao.
Developed with careful data curation, Sailor models are designed to understand and generate text across the diverse linguistic landscapes of the SEA region.
Built from [Qwen 1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524), Sailor encompasses models of varying sizes, spanning from 0.5B to 14B, to suit different requirements.
We further fine-tune the base models with open-source datasets to obtain instruction-tuned models, namely Sailor-Chat.
Benchmarking results demonstrate Sailor's proficiency in tasks such as question answering, commonsense reasoning, and other tasks in SEA languages.
> The logo was generated by MidJourney
## Model Summary
- **Model Collections:** [Base Model & Chat Model](https://huggingface.co/collections/sail/sailor-65e19a749f978976f1959825)
- **Project Website:** [sailorllm.github.io](https://sailorllm.github.io/)
- **Codebase:** [github.com/sail-sg/sailor-llm](https://github.com/sail-sg/sailor-llm)
- **Technical Report:** [arxiv.org/pdf/2404.03608.pdf](https://arxiv.org/pdf/2404.03608.pdf)
## Training details
Sailor is crafted by continually pre-training from language models like the remarkable Qwen 1.5 models, which already perform well on SEA languages.
The pre-training corpus heavily leverages publicly available corpora, including
[SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B),
[SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B),
[CC100](https://huggingface.co/datasets/cc100) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400).
The instruction-tuning corpora are all publicly available, including
[aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection),
[aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset),
[OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).
By employing aggressive data deduplication and careful data cleaning on the collected corpus, we have attained a high-quality dataset spanning various languages.
Through systematic experiments to determine the weights of different languages, Sailor models are trained on 200B to 400B tokens, tailored to different model sizes.
The approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise.
Finally, we continually pre-train the Qwen1.5-0.5B model with 400 billion tokens, and the other models with 200 billion tokens, to obtain the Sailor models.
## Requirements
The code for Sailor is included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`.
## Quickstart
The following code snippet shows how to load the tokenizer and model, and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"
model = AutoModelForCausalLM.from_pretrained(
'sail/Sailor-1.8B-Chat',
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained('sail/Sailor-1.8B-Chat')
system_prompt= 'You are a helpful assistant'
prompt = "Beri saya pengenalan singkat tentang model bahasa besar."
# prompt = "Hãy cho tôi một giới thiệu ngắn gọn về mô hình ngôn ngữ lớn."
# prompt = "ให้ฉันแนะนำสั้น ๆ เกี่ยวกับโมเดลภาษาขนาดใหญ่"
messages = [
{"role": "system", "content": system_prompt},
{"role": "question", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
input_ids = model_inputs.input_ids.to(device)
generated_ids = model.generate(
input_ids,
max_new_tokens=512,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
# License
Sailor is distributed under the terms of the Apache License 2.0.
There are no restrictions on research or commercial use, but usage must comply with the [Qwen License](https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE).
## Citation
If you find Sailor useful, please cite our work as follows:
```
@article{dou2024sailor,
title={Sailor: Open Language Models for South-East Asia},
author={Dou, Longxu and Liu, Qian and Zeng, Guangtao and Guo, Jia and Zhou, Jiahui and Lu, Wei and Lin, Min},
journal={arXiv preprint arXiv:2404.03608},
year={2024}
}
```
# Contact Us
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]) or [[email protected]](mailto:[email protected]).
|
RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf | RichardErkhov | 2024-10-27T22:25:59Z | 121 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-27T19:51:24Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi2_2.2B_mergkit_prunme - GGUF
- Model creator: https://huggingface.co/thucdangvan020999/
- Original model: https://huggingface.co/thucdangvan020999/phi2_2.2B_mergkit_prunme/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi2_2.2B_mergkit_prunme.Q2_K.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q2_K.gguf) | Q2_K | 0.84GB |
| [phi2_2.2B_mergkit_prunme.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q3_K_S.gguf) | Q3_K_S | 0.94GB |
| [phi2_2.2B_mergkit_prunme.Q3_K.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q3_K.gguf) | Q3_K | 1.07GB |
| [phi2_2.2B_mergkit_prunme.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q3_K_M.gguf) | Q3_K_M | 1.07GB |
| [phi2_2.2B_mergkit_prunme.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q3_K_L.gguf) | Q3_K_L | 1.18GB |
| [phi2_2.2B_mergkit_prunme.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.IQ4_XS.gguf) | IQ4_XS | 1.15GB |
| [phi2_2.2B_mergkit_prunme.Q4_0.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q4_0.gguf) | Q4_0 | 1.2GB |
| [phi2_2.2B_mergkit_prunme.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.IQ4_NL.gguf) | IQ4_NL | 1.21GB |
| [phi2_2.2B_mergkit_prunme.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q4_K_S.gguf) | Q4_K_S | 1.22GB |
| [phi2_2.2B_mergkit_prunme.Q4_K.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q4_K.gguf) | Q4_K | 1.31GB |
| [phi2_2.2B_mergkit_prunme.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q4_K_M.gguf) | Q4_K_M | 1.31GB |
| [phi2_2.2B_mergkit_prunme.Q4_1.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q4_1.gguf) | Q4_1 | 1.33GB |
| [phi2_2.2B_mergkit_prunme.Q5_0.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q5_0.gguf) | Q5_0 | 1.45GB |
| [phi2_2.2B_mergkit_prunme.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q5_K_S.gguf) | Q5_K_S | 1.45GB |
| [phi2_2.2B_mergkit_prunme.Q5_K.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q5_K.gguf) | Q5_K | 1.5GB |
| [phi2_2.2B_mergkit_prunme.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q5_K_M.gguf) | Q5_K_M | 1.5GB |
| [phi2_2.2B_mergkit_prunme.Q5_1.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q5_1.gguf) | Q5_1 | 1.57GB |
| [phi2_2.2B_mergkit_prunme.Q6_K.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q6_K.gguf) | Q6_K | 1.71GB |
| [phi2_2.2B_mergkit_prunme.Q8_0.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_phi2_2.2B_mergkit_prunme-gguf/blob/main/phi2_2.2B_mergkit_prunme.Q8_0.gguf) | Q8_0 | 2.21GB |
Original model description:
---
base_model:
- microsoft/phi-2
library_name: transformers
tags:
- mergekit
- merge
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 21]
model: microsoft/phi-2
- sources:
- layer_range: [28, 32]
model: microsoft/phi-2
```
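For reference, the slices above keep layers 0-20 and 28-31 of phi-2 and stack them in order, which is what reduces the model from 32 to 25 transformer blocks. A hypothetical sketch of the layer selection (not mergekit's actual code; end indices are treated as exclusive, as mergekit slices typically are):
```python
# Layer ranges copied from the YAML above.
kept_layers = list(range(0, 21)) + list(range(28, 32))
print(len(kept_layers))  # 25 blocks remain out of phi-2's 32
```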
|
1g0rrr/paper_painting | 1g0rrr | 2024-10-27T22:09:25Z | 10 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"robotics",
"region:us"
] | robotics | 2024-10-27T22:09:07Z | ---
library_name: lerobot
tags:
- act
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/huggingface/lerobot
- Docs: [More Information Needed] |
joe611/chickens-composite-201616161616-150-epochs-w-transform | joe611 | 2024-10-27T22:02:29Z | 42 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-10-26T22:59:28Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: chickens-composite-201616161616-150-epochs-w-transform
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chickens-composite-201616161616-150-epochs-w-transform
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2864
- Map: 0.7992
- Map 50: 0.9637
- Map 75: 0.8989
- Map Small: 0.3428
- Map Medium: 0.8051
- Map Large: 0.8153
- Mar 1: 0.3162
- Mar 10: 0.8378
- Mar 100: 0.843
- Mar Small: 0.4381
- Mar Medium: 0.8463
- Mar Large: 0.8551
- Map Chicken: 0.7833
- Mar 100 Chicken: 0.8298
- Map Duck: 0.747
- Mar 100 Duck: 0.7979
- Map Plant: 0.8672
- Mar 100 Plant: 0.9012
## Model description
More information needed
## Intended uses & limitations
More information needed
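As a placeholder while the card is filled in, a minimal inference sketch assuming the standard `transformers` object-detection pipeline (the class names chicken, duck, and plant are inferred from the evaluation metrics above):
```python
from transformers import pipeline

# Repo id taken from this card's title; the image path is illustrative.
detector = pipeline(
    "object-detection",
    model="joe611/chickens-composite-201616161616-150-epochs-w-transform",
)
for det in detector("flock.jpg"):
    print(det["label"], round(det["score"], 3), det["box"])
```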
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Chicken | Map Duck | Map Large | Map Medium | Map Plant | Map Small | Mar 1 | Mar 10 | Mar 100 | Mar 100 Chicken | Mar 100 Duck | Mar 100 Plant | Mar Large | Mar Medium | Mar Small |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:-----------:|:--------:|:---------:|:----------:|:---------:|:---------:|:------:|:------:|:-------:|:---------------:|:------------:|:-------------:|:---------:|:----------:|:---------:|
| 1.3747 | 1.0 | 500 | 1.3787 | 0.1018 | 0.1491 | 0.1155 | 0.0363 | 0.0 | 0.1826 | 0.0495 | 0.2691 | 0.006 | 0.0524 | 0.2694 | 0.355 | 0.3643 | 0.0 | 0.7006 | 0.3925 | 0.3288 | 0.0262 |
| 1.2078 | 2.0 | 1000 | 1.2359 | 0.2048 | 0.2894 | 0.2385 | 0.0858 | 0.0 | 0.2865 | 0.1101 | 0.5287 | 0.0066 | 0.086 | 0.3365 | 0.4465 | 0.604 | 0.0 | 0.7355 | 0.4846 | 0.4144 | 0.0895 |
| 1.0716 | 3.0 | 1500 | 1.0378 | 0.2591 | 0.3743 | 0.304 | 0.1358 | 0.0 | 0.2992 | 0.1831 | 0.6414 | 0.0233 | 0.1024 | 0.3601 | 0.3717 | 0.3921 | 0.0 | 0.723 | 0.3834 | 0.3414 | 0.0519 |
| 1.0097 | 4.0 | 2000 | 0.9668 | 0.2911 | 0.4199 | 0.3426 | 0.2048 | 0.0 | 0.3254 | 0.2382 | 0.6684 | 0.0787 | 0.1131 | 0.3961 | 0.4091 | 0.4976 | 0.0 | 0.7297 | 0.4087 | 0.3777 | 0.1333 |
| 0.6756 | 5.0 | 2500 | 0.8939 | 0.3274 | 0.4611 | 0.3732 | 0.2788 | 0.0 | 0.3693 | 0.2915 | 0.7034 | 0.0597 | 0.1245 | 0.449 | 0.4744 | 0.6635 | 0.0 | 0.7597 | 0.4895 | 0.453 | 0.1271 |
| 0.814 | 6.0 | 3000 | 0.8398 | 0.3292 | 0.4681 | 0.3844 | 0.3025 | 0.0 | 0.373 | 0.2896 | 0.6851 | 0.0637 | 0.1184 | 0.4607 | 0.4753 | 0.6802 | 0.0 | 0.7458 | 0.5049 | 0.4418 | 0.1148 |
| 0.8875 | 7.0 | 3500 | 1.0039 | 0.3382 | 0.5017 | 0.3967 | 0.3663 | 0.0 | 0.359 | 0.2988 | 0.6484 | 0.0382 | 0.1234 | 0.4309 | 0.4331 | 0.6056 | 0.0 | 0.6936 | 0.4519 | 0.3978 | 0.0867 |
| 0.9457 | 8.0 | 4000 | 0.7726 | 0.3549 | 0.5102 | 0.4198 | 0.3821 | 0.0 | 0.392 | 0.3128 | 0.6827 | 0.0431 | 0.1237 | 0.4609 | 0.4649 | 0.6663 | 0.0 | 0.7285 | 0.4913 | 0.4323 | 0.08 |
| 0.8339 | 9.0 | 4500 | 0.7188 | 0.3834 | 0.5328 | 0.4389 | 0.4271 | 0.0 | 0.4102 | 0.3461 | 0.7231 | 0.0449 | 0.1309 | 0.4861 | 0.4894 | 0.696 | 0.0 | 0.7721 | 0.5219 | 0.4577 | 0.1381 |
| 0.7813 | 10.0 | 5000 | 0.7378 | 0.3769 | 0.5485 | 0.4379 | 0.4384 | 0.0 | 0.3971 | 0.3526 | 0.6923 | 0.0362 | 0.124 | 0.4752 | 0.4803 | 0.6909 | 0.0 | 0.75 | 0.5162 | 0.4555 | 0.0833 |
| 0.7526 | 11.0 | 5500 | 0.6691 | 0.4059 | 0.5667 | 0.457 | 0.4777 | 0.0 | 0.4276 | 0.3719 | 0.7398 | 0.0528 | 0.1346 | 0.4873 | 0.4944 | 0.6956 | 0.0 | 0.7876 | 0.5378 | 0.4618 | 0.1514 |
| 0.7195 | 12.0 | 6000 | 0.6984 | 0.3983 | 0.5673 | 0.4621 | 0.4728 | 0.0 | 0.4143 | 0.3648 | 0.7222 | 0.0499 | 0.1274 | 0.4797 | 0.4874 | 0.6909 | 0.0 | 0.7712 | 0.5283 | 0.4558 | 0.1281 |
| 0.6467 | 13.0 | 6500 | 0.6682 | 0.408 | 0.5632 | 0.4939 | 0.5153 | 0.0 | 0.4315 | 0.388 | 0.7087 | 0.0458 | 0.1372 | 0.4872 | 0.49 | 0.7143 | 0.0 | 0.7558 | 0.5251 | 0.4654 | 0.1333 |
| 0.7253 | 14.0 | 7000 | 0.6210 | 0.4263 | 0.5778 | 0.5001 | 0.5356 | 0.0 | 0.4556 | 0.3959 | 0.7432 | 0.0782 | 0.1364 | 0.506 | 0.5088 | 0.7377 | 0.0 | 0.7888 | 0.5347 | 0.4813 | 0.139 |
| 0.7234 | 15.0 | 7500 | 0.6613 | 0.406 | 0.5657 | 0.486 | 0.5085 | 0.0 | 0.4334 | 0.3667 | 0.7096 | 0.0572 | 0.1337 | 0.4851 | 0.4885 | 0.7083 | 0.0 | 0.7573 | 0.5213 | 0.4478 | 0.1267 |
| 0.6467 | 16.0 | 8000 | 0.6621 | 0.4174 | 0.5704 | 0.4886 | 0.5214 | 0.0 | 0.4596 | 0.3702 | 0.7309 | 0.0454 | 0.1354 | 0.4926 | 0.496 | 0.7095 | 0.0 | 0.7785 | 0.539 | 0.4504 | 0.1133 |
| 0.6227 | 17.0 | 8500 | 0.6304 | 0.4221 | 0.5839 | 0.4954 | 0.5342 | 0.0 | 0.4436 | 0.3929 | 0.732 | 0.0783 | 0.1338 | 0.495 | 0.4977 | 0.7139 | 0.0 | 0.7791 | 0.5317 | 0.4721 | 0.1643 |
| 0.7302 | 18.0 | 9000 | 0.5794 | 0.4364 | 0.5848 | 0.5177 | 0.5726 | 0.0 | 0.4589 | 0.4131 | 0.7367 | 0.0589 | 0.1399 | 0.5078 | 0.5106 | 0.748 | 0.0 | 0.7836 | 0.5434 | 0.482 | 0.121 |
| 0.665 | 19.0 | 9500 | 0.5931 | 0.4435 | 0.6047 | 0.5339 | 0.5862 | 0.0 | 0.4622 | 0.4084 | 0.7442 | 0.0897 | 0.1405 | 0.5052 | 0.5099 | 0.7349 | 0.0 | 0.7948 | 0.5396 | 0.4776 | 0.1667 |
| 0.5947 | 20.0 | 10000 | 0.5701 | 0.4626 | 0.6084 | 0.5475 | 0.615 | 0.0 | 0.4907 | 0.4311 | 0.7728 | 0.0789 | 0.1413 | 0.5173 | 0.5248 | 0.7563 | 0.0 | 0.8182 | 0.5565 | 0.4942 | 0.2052 |
| 0.5727 | 21.0 | 10500 | 0.5720 | 0.4511 | 0.604 | 0.5269 | 0.5865 | 0.0 | 0.4784 | 0.4265 | 0.7667 | 0.1067 | 0.1369 | 0.5125 | 0.5177 | 0.7389 | 0.0 | 0.8142 | 0.5344 | 0.4946 | 0.229 |
| 0.5855 | 22.0 | 11000 | 0.5773 | 0.4519 | 0.6125 | 0.5447 | 0.5949 | 0.0 | 0.4783 | 0.4254 | 0.7608 | 0.1236 | 0.1383 | 0.5063 | 0.5118 | 0.7329 | 0.0 | 0.8024 | 0.5333 | 0.4872 | 0.2386 |
| 0.5441 | 23.0 | 11500 | 0.5694 | 0.4636 | 0.62 | 0.5595 | 0.6198 | 0.0 | 0.4837 | 0.4296 | 0.7709 | 0.0867 | 0.1445 | 0.509 | 0.5137 | 0.7313 | 0.0 | 0.8097 | 0.5346 | 0.4852 | 0.161 |
| 0.5504 | 24.0 | 12000 | 0.5569 | 0.4653 | 0.6191 | 0.5497 | 0.6305 | 0.0 | 0.4841 | 0.4365 | 0.7653 | 0.0995 | 0.1431 | 0.5149 | 0.5206 | 0.7556 | 0.0 | 0.8064 | 0.5405 | 0.4957 | 0.2229 |
| 0.5802 | 25.0 | 12500 | 0.5488 | 0.4621 | 0.6168 | 0.5455 | 0.6366 | 0.0 | 0.4952 | 0.431 | 0.7497 | 0.0932 | 0.1458 | 0.5118 | 0.516 | 0.7536 | 0.0 | 0.7945 | 0.541 | 0.491 | 0.1814 |
| 0.6644 | 26.0 | 13000 | 0.5489 | 0.4709 | 0.6259 | 0.564 | 0.637 | 0.0 | 0.5032 | 0.4382 | 0.7757 | 0.0979 | 0.1449 | 0.515 | 0.5187 | 0.7385 | 0.0 | 0.8176 | 0.5449 | 0.4914 | 0.2129 |
| 0.5006 | 27.0 | 13500 | 0.5375 | 0.4817 | 0.6348 | 0.5852 | 0.676 | 0.0008 | 0.5099 | 0.4509 | 0.7683 | 0.0954 | 0.1491 | 0.5201 | 0.5249 | 0.7623 | 0.0021 | 0.8103 | 0.5427 | 0.4986 | 0.19 |
| 0.5194 | 28.0 | 14000 | 0.5161 | 0.4872 | 0.6325 | 0.5795 | 0.6725 | 0.0015 | 0.5126 | 0.4629 | 0.7875 | 0.1579 | 0.1508 | 0.5289 | 0.5343 | 0.7655 | 0.0093 | 0.8282 | 0.5508 | 0.5098 | 0.2424 |
| 0.5253 | 29.0 | 14500 | 0.5392 | 0.4861 | 0.6461 | 0.5959 | 0.6739 | 0.0158 | 0.5018 | 0.4562 | 0.7685 | 0.1217 | 0.1511 | 0.5204 | 0.5248 | 0.7524 | 0.0144 | 0.8076 | 0.5371 | 0.5038 | 0.2138 |
| 0.7139 | 30.0 | 15000 | 0.5087 | 0.4933 | 0.6447 | 0.5839 | 0.6846 | 0.0082 | 0.5172 | 0.468 | 0.7873 | 0.0989 | 0.156 | 0.5313 | 0.5363 | 0.7667 | 0.0124 | 0.83 | 0.5527 | 0.5146 | 0.2552 |
| 0.5975 | 31.0 | 15500 | 0.5136 | 0.5044 | 0.6842 | 0.5915 | 0.6641 | 0.062 | 0.5448 | 0.4764 | 0.787 | 0.1557 | 0.1728 | 0.5681 | 0.575 | 0.7504 | 0.1433 | 0.8312 | 0.5871 | 0.5523 | 0.2838 |
| 0.6357 | 32.0 | 16000 | 0.5031 | 0.506 | 0.6647 | 0.5959 | 0.7083 | 0.0387 | 0.5347 | 0.4748 | 0.7711 | 0.1292 | 0.166 | 0.5445 | 0.5521 | 0.7778 | 0.067 | 0.8115 | 0.5721 | 0.5279 | 0.3205 |
| 0.4954 | 33.0 | 16500 | 0.4850 | 0.6026 | 0.7982 | 0.7113 | 0.7038 | 0.3072 | 0.5485 | 0.5987 | 0.7966 | 0.1613 | 0.2289 | 0.6542 | 0.6592 | 0.7687 | 0.3763 | 0.8327 | 0.601 | 0.6559 | 0.2952 |
| 0.5608 | 34.0 | 17000 | 0.4956 | 0.6291 | 0.8291 | 0.7295 | 0.6904 | 0.4001 | 0.6004 | 0.6177 | 0.7969 | 0.1447 | 0.2403 | 0.6767 | 0.6807 | 0.7512 | 0.4588 | 0.8321 | 0.6434 | 0.6715 | 0.2195 |
| 0.5545 | 35.0 | 17500 | 0.4593 | 0.6732 | 0.8781 | 0.7947 | 0.7125 | 0.5052 | 0.6504 | 0.6728 | 0.802 | 0.128 | 0.2712 | 0.719 | 0.7231 | 0.7679 | 0.566 | 0.8355 | 0.7029 | 0.7214 | 0.2586 |
| 0.4638 | 36.0 | 18000 | 0.4485 | 0.6864 | 0.9007 | 0.7957 | 0.7238 | 0.5375 | 0.683 | 0.6833 | 0.7978 | 0.1103 | 0.2766 | 0.7337 | 0.7394 | 0.7798 | 0.6041 | 0.8342 | 0.7294 | 0.7441 | 0.1986 |
| 0.4631 | 37.0 | 18500 | 0.4289 | 0.6983 | 0.8846 | 0.819 | 0.7339 | 0.5411 | 0.6778 | 0.6983 | 0.8198 | 0.1593 | 0.282 | 0.7377 | 0.7435 | 0.7877 | 0.5948 | 0.8479 | 0.7242 | 0.7472 | 0.26 |
| 0.4801 | 38.0 | 19000 | 0.4302 | 0.7033 | 0.9186 | 0.8231 | 0.7085 | 0.596 | 0.7351 | 0.6893 | 0.8056 | 0.208 | 0.2852 | 0.7465 | 0.7534 | 0.7635 | 0.6608 | 0.8358 | 0.7803 | 0.7432 | 0.2871 |
| 0.5169 | 39.0 | 19500 | 0.4603 | 0.6792 | 0.9211 | 0.8229 | 0.6854 | 0.5802 | 0.7224 | 0.6702 | 0.7719 | 0.0846 | 0.2782 | 0.7283 | 0.7346 | 0.746 | 0.6495 | 0.8082 | 0.7577 | 0.7324 | 0.191 |
| 0.5702 | 40.0 | 20000 | 0.4284 | 0.7044 | 0.9409 | 0.8336 | 0.7053 | 0.6151 | 0.7657 | 0.6982 | 0.7928 | 0.1289 | 0.2882 | 0.7526 | 0.7597 | 0.7603 | 0.6845 | 0.8342 | 0.8075 | 0.7525 | 0.2648 |
| 0.4602 | 41.0 | 20500 | 0.4185 | 0.7108 | 0.9349 | 0.8528 | 0.7103 | 0.6225 | 0.7698 | 0.7045 | 0.7996 | 0.1198 | 0.286 | 0.7573 | 0.7632 | 0.7627 | 0.6876 | 0.8394 | 0.8156 | 0.7583 | 0.209 |
| 0.5054 | 42.0 | 21000 | 0.4112 | 0.7046 | 0.9386 | 0.8376 | 0.7135 | 0.6025 | 0.7612 | 0.6897 | 0.7979 | 0.1176 | 0.285 | 0.7544 | 0.7605 | 0.7679 | 0.6784 | 0.8352 | 0.8061 | 0.7498 | 0.2319 |
| 0.4585 | 43.0 | 21500 | 0.4149 | 0.7019 | 0.9352 | 0.831 | 0.7039 | 0.5973 | 0.7528 | 0.6919 | 0.8043 | 0.1512 | 0.2842 | 0.746 | 0.7539 | 0.7587 | 0.6619 | 0.8412 | 0.8012 | 0.7437 | 0.2824 |
| 0.4809 | 44.0 | 22000 | 0.4257 | 0.7114 | 0.946 | 0.8499 | 0.7028 | 0.6311 | 0.7628 | 0.6952 | 0.8004 | 0.1798 | 0.2847 | 0.7502 | 0.7581 | 0.7452 | 0.6918 | 0.8373 | 0.8051 | 0.7436 | 0.2981 |
| 0.5096 | 45.0 | 22500 | 0.3866 | 0.7301 | 0.9409 | 0.8656 | 0.7337 | 0.6316 | 0.7604 | 0.723 | 0.825 | 0.2305 | 0.2934 | 0.768 | 0.7752 | 0.7853 | 0.6825 | 0.8579 | 0.7969 | 0.7703 | 0.3581 |
| 0.3569 | 46.0 | 23000 | 0.3903 | 0.7354 | 0.9551 | 0.85 | 0.7441 | 0.6476 | 0.7816 | 0.7281 | 0.8146 | 0.1643 | 0.2973 | 0.7783 | 0.7852 | 0.7913 | 0.7113 | 0.853 | 0.8215 | 0.78 | 0.2905 |
| 0.5786 | 47.0 | 23500 | 0.3864 | 0.7324 | 0.9466 | 0.8595 | 0.7353 | 0.6618 | 0.7822 | 0.717 | 0.8 | 0.118 | 0.2988 | 0.7731 | 0.779 | 0.7889 | 0.7124 | 0.8358 | 0.82 | 0.7693 | 0.2252 |
| 0.5832 | 48.0 | 24000 | 0.3837 | 0.7295 | 0.9548 | 0.8663 | 0.7363 | 0.6488 | 0.7473 | 0.7188 | 0.8036 | 0.2165 | 0.2953 | 0.7746 | 0.7835 | 0.7925 | 0.7144 | 0.8436 | 0.7958 | 0.775 | 0.3795 |
| 0.4607 | 49.0 | 24500 | 0.3718 | 0.7349 | 0.952 | 0.86 | 0.7436 | 0.653 | 0.7486 | 0.7263 | 0.8081 | 0.2217 | 0.2972 | 0.7798 | 0.7852 | 0.7929 | 0.7134 | 0.8494 | 0.7947 | 0.78 | 0.3295 |
| 0.4544 | 50.0 | 25000 | 0.3855 | 0.7337 | 0.9509 | 0.8708 | 0.7415 | 0.6595 | 0.7572 | 0.7273 | 0.8002 | 0.1915 | 0.2962 | 0.776 | 0.7831 | 0.7948 | 0.7155 | 0.8391 | 0.8071 | 0.7778 | 0.3548 |
| 0.4856 | 51.0 | 25500 | 0.3908 | 0.7289 | 0.948 | 0.8855 | 0.7357 | 0.6467 | 0.7705 | 0.7117 | 0.8042 | 0.2033 | 0.2988 | 0.7699 | 0.7754 | 0.7821 | 0.7082 | 0.8358 | 0.8117 | 0.7603 | 0.3552 |
| 0.525 | 52.0 | 26000 | 0.3737 | 0.7356 | 0.9475 | 0.8752 | 0.7445 | 0.661 | 0.7775 | 0.7244 | 0.8012 | 0.1072 | 0.298 | 0.7774 | 0.7847 | 0.7917 | 0.7268 | 0.8358 | 0.8254 | 0.7733 | 0.2576 |
| 0.461 | 53.0 | 26500 | 0.3872 | 0.73 | 0.9538 | 0.8836 | 0.7342 | 0.6596 | 0.7643 | 0.7287 | 0.796 | 0.1453 | 0.2963 | 0.7755 | 0.7815 | 0.779 | 0.7309 | 0.8345 | 0.8016 | 0.7774 | 0.2843 |
| 0.4168 | 54.0 | 27000 | 0.3672 | 0.7432 | 0.9508 | 0.8815 | 0.7403 | 0.6648 | 0.7746 | 0.7364 | 0.8247 | 0.1979 | 0.3004 | 0.7866 | 0.7924 | 0.7933 | 0.7237 | 0.8603 | 0.8117 | 0.7876 | 0.3281 |
| 0.5283 | 55.0 | 27500 | 0.3803 | 0.7312 | 0.9393 | 0.8797 | 0.73 | 0.6559 | 0.7731 | 0.7226 | 0.8077 | 0.2027 | 0.2998 | 0.7742 | 0.778 | 0.7802 | 0.7093 | 0.8445 | 0.81 | 0.7706 | 0.2871 |
| 0.4825 | 56.0 | 28000 | 0.3591 | 0.7475 | 0.9513 | 0.8948 | 0.7531 | 0.6794 | 0.7812 | 0.7373 | 0.8099 | 0.2344 | 0.304 | 0.7953 | 0.7992 | 0.8052 | 0.7412 | 0.8512 | 0.825 | 0.7872 | 0.361 |
| 0.4286 | 57.0 | 28500 | 0.3636 | 0.7375 | 0.9587 | 0.8681 | 0.7436 | 0.6484 | 0.7447 | 0.7316 | 0.8204 | 0.2267 | 0.2952 | 0.7811 | 0.7873 | 0.7917 | 0.7103 | 0.86 | 0.7927 | 0.7801 | 0.3533 |
| 0.505 | 58.0 | 29000 | 0.3713 | 0.7322 | 0.9479 | 0.8768 | 0.7316 | 0.6583 | 0.7717 | 0.7208 | 0.8067 | 0.24 | 0.2949 | 0.7722 | 0.7786 | 0.7782 | 0.7155 | 0.8421 | 0.8092 | 0.7716 | 0.32 |
| 0.3802 | 59.0 | 29500 | 0.3628 | 0.7445 | 0.9469 | 0.8799 | 0.7358 | 0.678 | 0.7525 | 0.7393 | 0.8196 | 0.2474 | 0.2973 | 0.782 | 0.7885 | 0.781 | 0.7278 | 0.8567 | 0.7951 | 0.784 | 0.32 |
| 0.3638 | 60.0 | 30000 | 0.3528 | 0.7432 | 0.9472 | 0.8783 | 0.7524 | 0.6658 | 0.7709 | 0.7413 | 0.8114 | 0.2418 | 0.2996 | 0.7839 | 0.7893 | 0.798 | 0.7175 | 0.8524 | 0.8165 | 0.7875 | 0.3619 |
| 0.4559 | 61.0 | 30500 | 0.3543 | 0.7393 | 0.9569 | 0.8867 | 0.7432 | 0.6716 | 0.7413 | 0.7374 | 0.8031 | 0.2875 | 0.2987 | 0.7806 | 0.7855 | 0.7917 | 0.7227 | 0.8421 | 0.7821 | 0.7842 | 0.3805 |
| 0.5254 | 62.0 | 31000 | 0.3775 | 0.7268 | 0.9492 | 0.8686 | 0.7326 | 0.6375 | 0.7406 | 0.7164 | 0.8103 | 0.2894 | 0.2896 | 0.7669 | 0.7745 | 0.7802 | 0.6938 | 0.8494 | 0.7812 | 0.7667 | 0.409 |
| 0.3529 | 63.0 | 31500 | 0.3562 | 0.7523 | 0.9533 | 0.8955 | 0.7437 | 0.6887 | 0.7747 | 0.7458 | 0.8246 | 0.2767 | 0.3047 | 0.793 | 0.7998 | 0.7948 | 0.7443 | 0.8603 | 0.8215 | 0.7922 | 0.4081 |
| 0.4234 | 64.0 | 32000 | 0.3625 | 0.7424 | 0.9439 | 0.8858 | 0.7355 | 0.6711 | 0.7568 | 0.7401 | 0.8207 | 0.2197 | 0.2969 | 0.7798 | 0.7861 | 0.7845 | 0.7175 | 0.8564 | 0.8095 | 0.7821 | 0.3205 |
| 0.4396 | 65.0 | 32500 | 0.3512 | 0.7614 | 0.9564 | 0.891 | 0.7586 | 0.7001 | 0.7852 | 0.7489 | 0.8255 | 0.2531 | 0.3058 | 0.8001 | 0.8069 | 0.8048 | 0.7557 | 0.8603 | 0.8285 | 0.7985 | 0.3833 |
| 0.4173 | 66.0 | 33000 | 0.3434 | 0.77 | 0.9558 | 0.8952 | 0.7579 | 0.7253 | 0.8013 | 0.7656 | 0.8268 | 0.2052 | 0.3066 | 0.8065 | 0.8138 | 0.8091 | 0.7691 | 0.8633 | 0.8334 | 0.8101 | 0.3562 |
| 0.4697 | 67.0 | 33500 | 0.3513 | 0.7545 | 0.9452 | 0.8796 | 0.7586 | 0.677 | 0.7904 | 0.7461 | 0.8279 | 0.2171 | 0.3026 | 0.7924 | 0.799 | 0.8087 | 0.7247 | 0.8636 | 0.8351 | 0.788 | 0.32 |
| 0.4771 | 68.0 | 34000 | 0.3578 | 0.7577 | 0.9455 | 0.8704 | 0.7557 | 0.6993 | 0.8022 | 0.7413 | 0.818 | 0.1788 | 0.3074 | 0.7947 | 0.8017 | 0.8004 | 0.7464 | 0.8582 | 0.8384 | 0.7888 | 0.329 |
| 0.4833 | 69.0 | 34500 | 0.3555 | 0.7502 | 0.9465 | 0.8737 | 0.7537 | 0.6766 | 0.7661 | 0.7317 | 0.8202 | 0.2681 | 0.2995 | 0.789 | 0.7955 | 0.7984 | 0.7299 | 0.8582 | 0.8117 | 0.7848 | 0.3676 |
| 0.4091 | 70.0 | 35000 | 0.3746 | 0.7332 | 0.9476 | 0.8716 | 0.7196 | 0.6808 | 0.7409 | 0.728 | 0.7992 | 0.1368 | 0.3026 | 0.7781 | 0.7825 | 0.7726 | 0.7351 | 0.8397 | 0.7993 | 0.7767 | 0.2071 |
| 0.3662 | 71.0 | 35500 | 0.3476 | 0.748 | 0.9477 | 0.8928 | 0.2295 | 0.7423 | 0.7833 | 0.3014 | 0.791 | 0.7964 | 0.3529 | 0.7918 | 0.8305 | 0.7374 | 0.7873 | 0.6861 | 0.7361 | 0.8204 | 0.8658 |
| 0.4244 | 72.0 | 36000 | 0.3509 | 0.7518 | 0.9432 | 0.8724 | 0.2416 | 0.7471 | 0.7649 | 0.2982 | 0.792 | 0.797 | 0.3657 | 0.7909 | 0.8121 | 0.7548 | 0.8004 | 0.6835 | 0.7299 | 0.8172 | 0.8606 |
| 0.4483 | 73.0 | 36500 | 0.3508 | 0.7472 | 0.9483 | 0.897 | 0.2306 | 0.753 | 0.7725 | 0.2979 | 0.7963 | 0.8027 | 0.409 | 0.8053 | 0.8304 | 0.738 | 0.7944 | 0.6835 | 0.7474 | 0.8201 | 0.8664 |
| 0.4498 | 74.0 | 37000 | 0.3357 | 0.7632 | 0.9535 | 0.882 | 0.2094 | 0.7633 | 0.7862 | 0.3069 | 0.8033 | 0.8087 | 0.3448 | 0.8104 | 0.8324 | 0.7617 | 0.8091 | 0.7007 | 0.7485 | 0.8272 | 0.8685 |
| 0.5208 | 75.0 | 37500 | 0.3492 | 0.7598 | 0.9506 | 0.8859 | 0.2379 | 0.7612 | 0.7963 | 0.3067 | 0.7977 | 0.8034 | 0.3476 | 0.8052 | 0.8349 | 0.7466 | 0.7917 | 0.6987 | 0.7454 | 0.834 | 0.873 |
| 0.3542 | 76.0 | 38000 | 0.3492 | 0.7606 | 0.9431 | 0.8889 | 0.2385 | 0.7543 | 0.7958 | 0.3072 | 0.7975 | 0.8028 | 0.3724 | 0.7957 | 0.8427 | 0.7548 | 0.7988 | 0.7124 | 0.7526 | 0.8146 | 0.857 |
| 0.439 | 77.0 | 38500 | 0.3485 | 0.7617 | 0.9583 | 0.8965 | 0.213 | 0.7633 | 0.7814 | 0.3039 | 0.7998 | 0.8063 | 0.3343 | 0.8052 | 0.8302 | 0.7599 | 0.8036 | 0.6995 | 0.7474 | 0.8257 | 0.8679 |
| 0.4294 | 78.0 | 39000 | 0.3406 | 0.7562 | 0.947 | 0.8774 | 0.2508 | 0.7586 | 0.7739 | 0.3044 | 0.7915 | 0.7994 | 0.3657 | 0.8001 | 0.8081 | 0.7478 | 0.7917 | 0.6802 | 0.7268 | 0.8405 | 0.8797 |
| 0.3643 | 79.0 | 39500 | 0.3285 | 0.7607 | 0.9492 | 0.8828 | 0.2242 | 0.7627 | 0.7663 | 0.3047 | 0.7998 | 0.8045 | 0.3529 | 0.8074 | 0.809 | 0.7602 | 0.804 | 0.6911 | 0.7402 | 0.8309 | 0.8694 |
| 0.3089 | 80.0 | 40000 | 0.3194 | 0.7734 | 0.9514 | 0.8911 | 0.2526 | 0.773 | 0.808 | 0.3087 | 0.8092 | 0.8163 | 0.3805 | 0.8168 | 0.8501 | 0.773 | 0.8175 | 0.7036 | 0.7515 | 0.8436 | 0.88 |
| 0.3825 | 81.0 | 40500 | 0.3217 | 0.7671 | 0.9532 | 0.8831 | 0.2602 | 0.7599 | 0.8157 | 0.3076 | 0.8079 | 0.8136 | 0.3638 | 0.8041 | 0.8615 | 0.7627 | 0.8147 | 0.7031 | 0.7557 | 0.8354 | 0.8703 |
| 0.465 | 82.0 | 41000 | 0.3319 | 0.7729 | 0.9571 | 0.8953 | 0.2869 | 0.7677 | 0.812 | 0.3083 | 0.8123 | 0.8173 | 0.3862 | 0.8111 | 0.8534 | 0.7579 | 0.8024 | 0.7203 | 0.7711 | 0.8404 | 0.8785 |
| 0.3699 | 83.0 | 41500 | 0.3355 | 0.7681 | 0.9404 | 0.8881 | 0.2056 | 0.7663 | 0.7947 | 0.3088 | 0.8062 | 0.8112 | 0.2671 | 0.809 | 0.8387 | 0.7788 | 0.8246 | 0.7036 | 0.7443 | 0.8218 | 0.8645 |
| 0.4712 | 84.0 | 42000 | 0.3503 | 0.7537 | 0.9542 | 0.8957 | 0.2904 | 0.7501 | 0.7776 | 0.3017 | 0.7948 | 0.8006 | 0.3829 | 0.7959 | 0.8247 | 0.7472 | 0.7984 | 0.6924 | 0.7423 | 0.8214 | 0.8612 |
| 0.3711 | 85.0 | 42500 | 0.3334 | 0.7686 | 0.9549 | 0.8986 | 0.2611 | 0.7664 | 0.8073 | 0.3066 | 0.8066 | 0.8126 | 0.3724 | 0.8067 | 0.8506 | 0.7423 | 0.7905 | 0.7212 | 0.768 | 0.8425 | 0.8794 |
| 0.4093 | 86.0 | 43000 | 0.3299 | 0.7711 | 0.9535 | 0.8948 | 0.2808 | 0.7678 | 0.8096 | 0.31 | 0.8091 | 0.8156 | 0.3848 | 0.8098 | 0.8489 | 0.7478 | 0.7996 | 0.7176 | 0.7629 | 0.8478 | 0.8842 |
| 0.447 | 87.0 | 43500 | 0.3274 | 0.7718 | 0.9547 | 0.8992 | 0.2794 | 0.7699 | 0.8004 | 0.3086 | 0.8129 | 0.8177 | 0.3738 | 0.8106 | 0.8445 | 0.7686 | 0.8151 | 0.7125 | 0.7639 | 0.8343 | 0.8742 |
| 0.3878 | 88.0 | 44000 | 0.3162 | 0.7836 | 0.9558 | 0.9025 | 0.2726 | 0.7785 | 0.8195 | 0.3138 | 0.8202 | 0.8262 | 0.3805 | 0.8201 | 0.8567 | 0.7758 | 0.8206 | 0.7307 | 0.7763 | 0.8442 | 0.8818 |
| 0.3293 | 89.0 | 44500 | 0.3279 | 0.7753 | 0.9585 | 0.8908 | 0.2607 | 0.7729 | 0.8023 | 0.3112 | 0.8129 | 0.8193 | 0.3748 | 0.8182 | 0.8371 | 0.76 | 0.8095 | 0.7257 | 0.7732 | 0.8403 | 0.8752 |
| 0.279 | 90.0 | 45000 | 0.3147 | 0.7774 | 0.9502 | 0.8862 | 0.2608 | 0.7753 | 0.8075 | 0.3091 | 0.8166 | 0.8217 | 0.351 | 0.8164 | 0.8512 | 0.7737 | 0.821 | 0.7143 | 0.7608 | 0.8442 | 0.8833 |
| 0.339 | 91.0 | 45500 | 0.3120 | 0.7779 | 0.9532 | 0.8949 | 0.2683 | 0.7732 | 0.8047 | 0.3094 | 0.8169 | 0.8225 | 0.3881 | 0.8181 | 0.8504 | 0.7784 | 0.8262 | 0.7125 | 0.7598 | 0.8428 | 0.8815 |
| 0.3912 | 92.0 | 46000 | 0.3251 | 0.7654 | 0.9549 | 0.9026 | 0.239 | 0.7613 | 0.7949 | 0.3052 | 0.8083 | 0.8145 | 0.4105 | 0.81 | 0.8352 | 0.7566 | 0.8115 | 0.7011 | 0.7536 | 0.8385 | 0.8785 |
| 0.3807 | 93.0 | 46500 | 0.3135 | 0.775 | 0.9623 | 0.8789 | 0.3063 | 0.7761 | 0.8012 | 0.3088 | 0.8154 | 0.822 | 0.4376 | 0.8208 | 0.845 | 0.7674 | 0.821 | 0.7131 | 0.7608 | 0.8444 | 0.8842 |
| 0.3656 | 94.0 | 47000 | 0.3086 | 0.7801 | 0.95 | 0.8789 | 0.2709 | 0.7843 | 0.8114 | 0.3144 | 0.8184 | 0.8227 | 0.3586 | 0.8282 | 0.8487 | 0.7752 | 0.8238 | 0.726 | 0.766 | 0.8391 | 0.8782 |
| 0.4247 | 95.0 | 47500 | 0.3114 | 0.7796 | 0.9586 | 0.8881 | 0.3308 | 0.7744 | 0.7972 | 0.3095 | 0.8172 | 0.8224 | 0.4505 | 0.8143 | 0.8408 | 0.7644 | 0.8135 | 0.7272 | 0.7701 | 0.8473 | 0.8836 |
| 0.4126 | 96.0 | 48000 | 0.3133 | 0.7738 | 0.9614 | 0.8988 | 0.3127 | 0.7708 | 0.8021 | 0.31 | 0.8124 | 0.8185 | 0.3962 | 0.8166 | 0.842 | 0.7608 | 0.8087 | 0.7137 | 0.7629 | 0.8468 | 0.8839 |
| 0.359 | 97.0 | 48500 | 0.3201 | 0.7733 | 0.953 | 0.906 | 0.3088 | 0.7727 | 0.8001 | 0.3107 | 0.81 | 0.8155 | 0.391 | 0.8137 | 0.8409 | 0.7506 | 0.7964 | 0.7168 | 0.7629 | 0.8526 | 0.8873 |
| 0.4638 | 98.0 | 49000 | 0.3107 | 0.782 | 0.9587 | 0.887 | 0.3189 | 0.783 | 0.8032 | 0.3128 | 0.8212 | 0.8258 | 0.3933 | 0.8258 | 0.8441 | 0.7726 | 0.8147 | 0.7132 | 0.767 | 0.8601 | 0.8958 |
| 0.3504 | 99.0 | 49500 | 0.3072 | 0.7808 | 0.9538 | 0.9073 | 0.3213 | 0.7827 | 0.8113 | 0.3134 | 0.8181 | 0.823 | 0.4033 | 0.8219 | 0.8499 | 0.7752 | 0.823 | 0.7103 | 0.7546 | 0.8571 | 0.8912 |
| 0.4122 | 100.0 | 50000 | 0.3071 | 0.7832 | 0.9591 | 0.9103 | 0.3515 | 0.7866 | 0.7982 | 0.3125 | 0.8203 | 0.8259 | 0.4233 | 0.8275 | 0.8408 | 0.7704 | 0.8179 | 0.7221 | 0.7691 | 0.8572 | 0.8909 |
| 0.4066 | 101.0 | 50500 | 0.3091 | 0.7845 | 0.9595 | 0.8987 | 0.3408 | 0.7879 | 0.8005 | 0.3126 | 0.82 | 0.8259 | 0.4086 | 0.826 | 0.8379 | 0.7781 | 0.823 | 0.7198 | 0.7649 | 0.8555 | 0.8897 |
| 0.3207 | 102.0 | 51000 | 0.3127 | 0.7783 | 0.9531 | 0.8992 | 0.3268 | 0.7782 | 0.8049 | 0.312 | 0.8201 | 0.8238 | 0.4262 | 0.8221 | 0.8476 | 0.7715 | 0.8198 | 0.7081 | 0.7588 | 0.8555 | 0.8927 |
| 0.3462 | 103.0 | 51500 | 0.3052 | 0.7911 | 0.957 | 0.9074 | 0.3095 | 0.7945 | 0.8058 | 0.3146 | 0.8252 | 0.83 | 0.41 | 0.8332 | 0.8423 | 0.7746 | 0.8194 | 0.7385 | 0.7763 | 0.8601 | 0.8942 |
| 0.3938 | 104.0 | 52000 | 0.2955 | 0.793 | 0.9638 | 0.9043 | 0.3318 | 0.7971 | 0.8019 | 0.3167 | 0.8306 | 0.8352 | 0.4348 | 0.839 | 0.8371 | 0.7777 | 0.825 | 0.7434 | 0.7876 | 0.858 | 0.893 |
| 0.3236 | 105.0 | 52500 | 0.2997 | 0.7909 | 0.9607 | 0.9079 | 0.3475 | 0.7935 | 0.8059 | 0.3149 | 0.8265 | 0.8323 | 0.4552 | 0.8326 | 0.8405 | 0.7767 | 0.8206 | 0.7373 | 0.7825 | 0.8588 | 0.8939 |
| 0.3559 | 106.0 | 53000 | 0.2987 | 0.7918 | 0.9599 | 0.9093 | 0.3328 | 0.8003 | 0.8102 | 0.3167 | 0.8294 | 0.8358 | 0.4367 | 0.8392 | 0.8494 | 0.7782 | 0.8278 | 0.7367 | 0.7856 | 0.8606 | 0.8939 |
| 0.39 | 107.0 | 53500 | 0.3079 | 0.7835 | 0.961 | 0.9054 | 0.3091 | 0.7855 | 0.7968 | 0.3115 | 0.8186 | 0.8253 | 0.409 | 0.8258 | 0.8377 | 0.7606 | 0.8079 | 0.7349 | 0.7784 | 0.8551 | 0.8897 |
| 0.362 | 108.0 | 54000 | 0.2972 | 0.7924 | 0.9578 | 0.905 | 0.3031 | 0.7963 | 0.8132 | 0.3145 | 0.8291 | 0.8352 | 0.4043 | 0.8369 | 0.8524 | 0.7803 | 0.8234 | 0.7422 | 0.7918 | 0.8548 | 0.8903 |
| 0.3628 | 109.0 | 54500 | 0.3163 | 0.7773 | 0.9624 | 0.9104 | 0.3353 | 0.7826 | 0.7896 | 0.3092 | 0.8181 | 0.8238 | 0.4433 | 0.8235 | 0.8397 | 0.7642 | 0.8131 | 0.7221 | 0.7732 | 0.8457 | 0.8852 |
| 0.3574 | 110.0 | 55000 | 0.3100 | 0.781 | 0.9617 | 0.9053 | 0.3376 | 0.7829 | 0.806 | 0.3118 | 0.82 | 0.8256 | 0.4252 | 0.8243 | 0.8469 | 0.7664 | 0.8147 | 0.7253 | 0.7753 | 0.8511 | 0.887 |
| 0.368 | 111.0 | 55500 | 0.2933 | 0.7928 | 0.9593 | 0.9002 | 0.3206 | 0.7956 | 0.8223 | 0.3186 | 0.8282 | 0.8351 | 0.4252 | 0.8349 | 0.8599 | 0.79 | 0.8365 | 0.7313 | 0.7753 | 0.857 | 0.8936 |
| 0.3394 | 112.0 | 56000 | 0.2973 | 0.792 | 0.9547 | 0.9061 | 0.3368 | 0.7922 | 0.8306 | 0.3146 | 0.827 | 0.8325 | 0.4029 | 0.83 | 0.8643 | 0.7841 | 0.8294 | 0.7361 | 0.7763 | 0.8559 | 0.8918 |
| 0.3677 | 113.0 | 56500 | 0.2919 | 0.7984 | 0.9604 | 0.9117 | 0.3672 | 0.7988 | 0.8247 | 0.3174 | 0.8343 | 0.8398 | 0.4433 | 0.837 | 0.8588 | 0.7855 | 0.8321 | 0.7524 | 0.7938 | 0.8574 | 0.8933 |
| 0.3681 | 114.0 | 57000 | 0.3044 | 0.7833 | 0.9586 | 0.8961 | 0.326 | 0.786 | 0.8076 | 0.3116 | 0.8207 | 0.8256 | 0.3905 | 0.8261 | 0.8483 | 0.7709 | 0.8179 | 0.7311 | 0.7732 | 0.848 | 0.8858 |
| 0.3562 | 115.0 | 57500 | 0.2904 | 0.7918 | 0.9632 | 0.8991 | 0.3676 | 0.793 | 0.8148 | 0.3128 | 0.8293 | 0.8339 | 0.4362 | 0.8345 | 0.8539 | 0.7843 | 0.827 | 0.7299 | 0.7804 | 0.8612 | 0.8942 |
| 0.3524 | 116.0 | 58000 | 0.2993 | 0.7868 | 0.9596 | 0.8935 | 0.3494 | 0.7868 | 0.8053 | 0.3096 | 0.8241 | 0.8305 | 0.4462 | 0.8306 | 0.8468 | 0.7799 | 0.8258 | 0.7199 | 0.7691 | 0.8604 | 0.8967 |
| 0.3553 | 117.0 | 58500 | 0.2957 | 0.7906 | 0.9596 | 0.8967 | 0.3272 | 0.7925 | 0.808 | 0.3134 | 0.8307 | 0.8368 | 0.4229 | 0.8383 | 0.8503 | 0.7817 | 0.8298 | 0.7306 | 0.7845 | 0.8596 | 0.8961 |
| 0.3976 | 118.0 | 59000 | 0.2960 | 0.7898 | 0.9594 | 0.8957 | 0.3654 | 0.7936 | 0.8076 | 0.3122 | 0.8276 | 0.8337 | 0.451 | 0.8373 | 0.8466 | 0.7777 | 0.8262 | 0.7295 | 0.7763 | 0.8623 | 0.8985 |
| 0.3359 | 119.0 | 59500 | 0.3019 | 0.787 | 0.9608 | 0.901 | 0.3603 | 0.7874 | 0.8063 | 0.3129 | 0.8247 | 0.83 | 0.4529 | 0.8295 | 0.8428 | 0.7713 | 0.8187 | 0.7276 | 0.7753 | 0.8622 | 0.8961 |
| 0.3539 | 120.0 | 60000 | 0.2955 | 0.791 | 0.9618 | 0.8983 | 0.346 | 0.7919 | 0.809 | 0.3134 | 0.828 | 0.8339 | 0.4557 | 0.8337 | 0.8467 | 0.7759 | 0.8234 | 0.7341 | 0.7804 | 0.8629 | 0.8979 |
| 0.3807 | 121.0 | 60500 | 0.2925 | 0.7959 | 0.9593 | 0.894 | 0.3381 | 0.7973 | 0.8155 | 0.3171 | 0.834 | 0.8391 | 0.4519 | 0.8399 | 0.8493 | 0.7827 | 0.8321 | 0.7384 | 0.7856 | 0.8665 | 0.8997 |
| 0.3657 | 122.0 | 61000 | 0.3006 | 0.7916 | 0.9634 | 0.8955 | 0.3356 | 0.7899 | 0.8187 | 0.3182 | 0.8276 | 0.8345 | 0.4352 | 0.834 | 0.8516 | 0.7768 | 0.8242 | 0.7377 | 0.7835 | 0.8604 | 0.8958 |
| 0.3388 | 123.0 | 61500 | 0.2985 | 0.7894 | 0.9631 | 0.8973 | 0.3342 | 0.7913 | 0.8084 | 0.3131 | 0.8275 | 0.833 | 0.44 | 0.8346 | 0.8481 | 0.7735 | 0.8206 | 0.7316 | 0.7814 | 0.8629 | 0.897 |
| 0.3234 | 124.0 | 62000 | 0.2949 | 0.7951 | 0.9601 | 0.8963 | 0.3344 | 0.7999 | 0.8059 | 0.3156 | 0.8322 | 0.8381 | 0.4452 | 0.8401 | 0.8458 | 0.7824 | 0.8306 | 0.7392 | 0.7845 | 0.8636 | 0.8991 |
| 0.341 | 125.0 | 62500 | 0.2918 | 0.7977 | 0.964 | 0.9012 | 0.3369 | 0.8003 | 0.82 | 0.3165 | 0.836 | 0.8414 | 0.4419 | 0.8434 | 0.8564 | 0.783 | 0.8294 | 0.7453 | 0.7948 | 0.8647 | 0.9 |
| 0.3289 | 126.0 | 63000 | 0.2900 | 0.8004 | 0.9627 | 0.904 | 0.3378 | 0.8034 | 0.823 | 0.3186 | 0.8403 | 0.8461 | 0.4452 | 0.8482 | 0.8596 | 0.7823 | 0.8329 | 0.7492 | 0.8031 | 0.8698 | 0.9021 |
| 0.3619 | 127.0 | 63500 | 0.2918 | 0.7985 | 0.9625 | 0.8978 | 0.3304 | 0.804 | 0.8171 | 0.3174 | 0.8368 | 0.8421 | 0.4114 | 0.8459 | 0.855 | 0.7839 | 0.8294 | 0.745 | 0.7969 | 0.8668 | 0.9 |
| 0.3725 | 128.0 | 64000 | 0.2907 | 0.7956 | 0.9634 | 0.8987 | 0.3324 | 0.8023 | 0.8105 | 0.3157 | 0.8345 | 0.8401 | 0.4267 | 0.8446 | 0.8508 | 0.7829 | 0.8266 | 0.7403 | 0.7948 | 0.8636 | 0.8988 |
| 0.3488 | 129.0 | 64500 | 0.2899 | 0.7982 | 0.9623 | 0.8974 | 0.3311 | 0.8049 | 0.8123 | 0.3171 | 0.8372 | 0.8432 | 0.4352 | 0.8481 | 0.8514 | 0.7821 | 0.8298 | 0.7478 | 0.8 | 0.8647 | 0.8997 |
| 0.2774 | 130.0 | 65000 | 0.2880 | 0.7985 | 0.9623 | 0.8972 | 0.3337 | 0.8056 | 0.8152 | 0.3174 | 0.8363 | 0.8424 | 0.4286 | 0.8472 | 0.8526 | 0.7842 | 0.8313 | 0.7457 | 0.7959 | 0.8656 | 0.9 |
| 0.3268 | 131.0 | 65500 | 0.2950 | 0.7966 | 0.9645 | 0.904 | 0.3272 | 0.8028 | 0.8135 | 0.3162 | 0.836 | 0.8422 | 0.4352 | 0.8471 | 0.8531 | 0.7806 | 0.8294 | 0.7465 | 0.799 | 0.8628 | 0.8982 |
| 0.327 | 132.0 | 66000 | 0.2854 | 0.8026 | 0.9658 | 0.8916 | 0.3174 | 0.8085 | 0.8186 | 0.3177 | 0.8391 | 0.845 | 0.4124 | 0.8497 | 0.8567 | 0.7927 | 0.8369 | 0.7447 | 0.7959 | 0.8704 | 0.9021 |
| 0.3712 | 133.0 | 66500 | 0.2902 | 0.8018 | 0.9658 | 0.8912 | 0.3277 | 0.8063 | 0.8175 | 0.3174 | 0.8391 | 0.8449 | 0.4352 | 0.8477 | 0.8584 | 0.7912 | 0.8365 | 0.7457 | 0.7969 | 0.8686 | 0.9012 |
| 0.3267 | 134.0 | 67000 | 0.2885 | 0.8009 | 0.964 | 0.8987 | 0.3322 | 0.8065 | 0.8133 | 0.3167 | 0.8384 | 0.8438 | 0.4252 | 0.8484 | 0.8523 | 0.7863 | 0.8321 | 0.7468 | 0.7969 | 0.8696 | 0.9024 |
| 0.4273 | 135.0 | 67500 | 0.2911 | 0.7979 | 0.9638 | 0.9028 | 0.3267 | 0.8033 | 0.8095 | 0.3175 | 0.8352 | 0.8406 | 0.4205 | 0.8448 | 0.8494 | 0.7828 | 0.8298 | 0.7428 | 0.7918 | 0.8681 | 0.9003 |
| 0.3564 | 136.0 | 68000 | 0.2915 | 0.797 | 0.9634 | 0.9013 | 0.3325 | 0.8023 | 0.818 | 0.3176 | 0.8349 | 0.8406 | 0.419 | 0.8449 | 0.8553 | 0.7825 | 0.8286 | 0.7411 | 0.7918 | 0.8674 | 0.9015 |
| 0.358 | 137.0 | 68500 | 0.2883 | 0.8007 | 0.9635 | 0.9034 | 0.3399 | 0.8054 | 0.8189 | 0.3187 | 0.8379 | 0.8434 | 0.4367 | 0.8469 | 0.8564 | 0.7868 | 0.8333 | 0.7455 | 0.7948 | 0.8698 | 0.9021 |
| 0.3715 | 138.0 | 69000 | 0.2868 | 0.7973 | 0.9632 | 0.9007 | 0.3366 | 0.8025 | 0.8125 | 0.3172 | 0.8358 | 0.8413 | 0.4286 | 0.8446 | 0.8521 | 0.7847 | 0.8317 | 0.7397 | 0.7907 | 0.8676 | 0.9015 |
| 0.4042 | 139.0 | 69500 | 0.2852 | 0.8022 | 0.9636 | 0.903 | 0.341 | 0.8065 | 0.8131 | 0.3187 | 0.84 | 0.8453 | 0.4381 | 0.849 | 0.853 | 0.7864 | 0.8333 | 0.7506 | 0.801 | 0.8695 | 0.9015 |
| 0.3881 | 140.0 | 70000 | 0.2871 | 0.8016 | 0.9632 | 0.8966 | 0.3441 | 0.8064 | 0.8174 | 0.3176 | 0.8384 | 0.8437 | 0.4348 | 0.8472 | 0.8561 | 0.7864 | 0.831 | 0.7496 | 0.799 | 0.8687 | 0.9012 |
| 0.3214 | 141.0 | 70500 | 0.2878 | 0.798 | 0.9632 | 0.8974 | 0.3421 | 0.8015 | 0.8145 | 0.316 | 0.8372 | 0.8423 | 0.4381 | 0.8449 | 0.8547 | 0.7844 | 0.8306 | 0.7425 | 0.7969 | 0.8671 | 0.8994 |
| 0.3357 | 142.0 | 71000 | 0.2879 | 0.7978 | 0.9639 | 0.8906 | 0.342 | 0.8026 | 0.8151 | 0.3163 | 0.8364 | 0.8416 | 0.4348 | 0.8452 | 0.8546 | 0.7829 | 0.8278 | 0.7456 | 0.7969 | 0.865 | 0.9 |
| 0.302 | 143.0 | 71500 | 0.2862 | 0.8007 | 0.9638 | 0.8928 | 0.3448 | 0.8046 | 0.8169 | 0.317 | 0.8381 | 0.8434 | 0.4348 | 0.8472 | 0.856 | 0.7869 | 0.831 | 0.7467 | 0.7979 | 0.8684 | 0.9012 |
| 0.3504 | 144.0 | 72000 | 0.2856 | 0.801 | 0.9638 | 0.8912 | 0.3394 | 0.8055 | 0.8199 | 0.3177 | 0.8392 | 0.8445 | 0.4381 | 0.8476 | 0.8576 | 0.7856 | 0.831 | 0.7486 | 0.801 | 0.8689 | 0.9015 |
| 0.3533 | 145.0 | 72500 | 0.2863 | 0.7992 | 0.9637 | 0.8912 | 0.3428 | 0.8055 | 0.8154 | 0.3162 | 0.8378 | 0.8431 | 0.4381 | 0.8467 | 0.8551 | 0.7858 | 0.831 | 0.7443 | 0.7969 | 0.8676 | 0.9015 |
| 0.3648 | 146.0 | 73000 | 0.2861 | 0.7987 | 0.9638 | 0.8913 | 0.3428 | 0.8047 | 0.8153 | 0.3162 | 0.8375 | 0.8427 | 0.4381 | 0.8463 | 0.855 | 0.7855 | 0.831 | 0.7435 | 0.7959 | 0.8672 | 0.9012 |
| 0.3381 | 147.0 | 73500 | 0.2867 | 0.7996 | 0.9637 | 0.899 | 0.3428 | 0.8053 | 0.8153 | 0.3163 | 0.8379 | 0.8431 | 0.4381 | 0.8464 | 0.8551 | 0.7846 | 0.8302 | 0.747 | 0.7979 | 0.8672 | 0.9012 |
| 0.3483 | 148.0 | 74000 | 0.2864 | 0.7995 | 0.9637 | 0.8989 | 0.3457 | 0.8055 | 0.8153 | 0.3165 | 0.8381 | 0.8433 | 0.4381 | 0.8467 | 0.8551 | 0.7833 | 0.8298 | 0.748 | 0.799 | 0.8672 | 0.9012 |
| 0.3674 | 149.0 | 74500 | 0.2864 | 0.7992 | 0.9637 | 0.8989 | 0.3428 | 0.8051 | 0.8153 | 0.3162 | 0.8378 | 0.843 | 0.4381 | 0.8463 | 0.8551 | 0.7833 | 0.8298 | 0.747 | 0.7979 | 0.8672 | 0.9012 |
| 0.3838 | 150.0 | 75000 | 0.2864 | 0.7992 | 0.9637 | 0.8989 | 0.3428 | 0.8051 | 0.8153 | 0.3162 | 0.8378 | 0.843 | 0.4381 | 0.8463 | 0.8551 | 0.7833 | 0.8298 | 0.747 | 0.7979 | 0.8672 | 0.9012 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.5.0+cu121
- Datasets 2.19.2
- Tokenizers 0.20.1
|
muhtasham/tajik-llama3-merged-4bit | muhtasham | 2024-10-27T22:02:22Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-27T22:00:46Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** muhtasham
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf | RichardErkhov | 2024-10-27T21:59:27Z | 8 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T19:37:51Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-1.5B-Instruct-Viet-SFT - GGUF
- Model creator: https://huggingface.co/jaeyong2/
- Original model: https://huggingface.co/jaeyong2/Qwen2.5-1.5B-Instruct-Viet-SFT/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q2_K.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q2_K.gguf) | Q2_K | 0.63GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q3_K.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q3_K.gguf) | Q3_K | 0.77GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q4_0.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q4_0.gguf) | Q4_0 | 0.87GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q4_K.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q4_K.gguf) | Q4_K | 0.92GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q4_1.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q4_1.gguf) | Q4_1 | 0.95GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q5_0.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q5_0.gguf) | Q5_0 | 1.02GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q5_K.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q5_K.gguf) | Q5_K | 1.05GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q5_1.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q5_1.gguf) | Q5_1 | 1.1GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q6_K.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q6_K.gguf) | Q6_K | 1.19GB |
| [Qwen2.5-1.5B-Instruct-Viet-SFT.Q8_0.gguf](https://huggingface.co/RichardErkhov/jaeyong2_-_Qwen2.5-1.5B-Instruct-Viet-SFT-gguf/blob/main/Qwen2.5-1.5B-Instruct-Viet-SFT.Q8_0.gguf) | Q8_0 | 1.53GB |
Original model description:
---
library_name: transformers
language:
- vi
- en
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AMead10/SuperNova-Medius-AWQ | AMead10 | 2024-10-27T21:57:49Z | 109 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"autoquant",
"awq",
"conversational",
"base_model:Qwen/Qwen2.5-14B",
"base_model:quantized:Qwen/Qwen2.5-14B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2024-10-27T21:55:28Z | ---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
- autoquant
- awq
base_model:
- Qwen/Qwen2.5-14B
model-index:
- name: SuperNova-Medius
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 55.6
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/SuperNova-Medius
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 49.3
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/SuperNova-Medius
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 32.48
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/SuperNova-Medius
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 17.9
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/SuperNova-Medius
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 19.19
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/SuperNova-Medius
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 48.83
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/SuperNova-Medius
name: Open LLM Leaderboard
---
# Arcee-SuperNova-Medius
Arcee-SuperNova-Medius is a 14B parameter language model developed by Arcee.ai, built on the Qwen2.5-14B-Instruct architecture. This unique model is the result of a cross-architecture distillation pipeline, combining knowledge from both the Qwen2.5-72B-Instruct model and the Llama-3.1-405B-Instruct model. By leveraging the strengths of these two distinct architectures, SuperNova-Medius achieves high-quality instruction-following and complex reasoning capabilities in a mid-sized, resource-efficient form.
SuperNova-Medius is designed to excel in a variety of business use cases, including customer support, content creation, and technical assistance, while maintaining compatibility with smaller hardware configurations. It’s an ideal solution for organizations looking for advanced capabilities without the high resource requirements of larger models like our SuperNova-70B.
## Distillation Overview
The development of SuperNova-Medius involved a sophisticated multi-teacher, cross-architecture distillation process, with the following key steps:
1. **Logit Distillation from Llama 3.1 405B**:
- We distilled the logits of Llama 3.1 405B using an offline approach.
- The top K logits for each token were stored to capture most of the probability mass while managing storage requirements.
2. **Cross-Architecture Adaptation**:
- Using `mergekit-tokensurgeon`, we created a version of Qwen2.5-14B that uses the vocabulary of Llama 3.1 405B.
- This allowed for the use of Llama 3.1 405B logits in training the Qwen-based model.
3. **Distillation to Qwen Architecture**:
- The adapted Qwen2.5-14B model was trained using the stored 405B logits as the target (a sketch of this top-K distillation loss follows this list).
4. **Parallel Qwen Distillation**:
- In a separate process, Qwen2-72B was distilled into a 14B model.
5. **Final Fusion and Fine-Tuning**:
- The Llama-distilled Qwen model's vocabulary was reverted to Qwen vocabulary.
- After re-aligning the vocabularies, a final fusion and fine-tuning step was conducted, using a specialized dataset from [EvolKit](https://github.com/arcee-ai/EvolKit) to ensure that SuperNova-Medius maintained coherence, fluency, and context understanding across a broad range of tasks.
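The top-K logit distillation described in steps 1 and 3 can be illustrated with a short sketch. This is not Arcee's training code: the loss form, temperature handling, and tensor layout are assumptions, and it only shows how stored top-K teacher logits can serve as the distillation target for the vocabulary-adapted student.

```python
import torch
import torch.nn.functional as F

def topk_distillation_loss(student_logits, teacher_topk_values, teacher_topk_indices,
                           temperature: float = 1.0):
    """Cross-entropy against stored top-K teacher logits (a sketch, not Arcee's code).

    student_logits:       (batch, seq, vocab) from the vocabulary-adapted Qwen student
    teacher_topk_values:  (batch, seq, K) logits stored offline from Llama 3.1 405B
    teacher_topk_indices: (batch, seq, K) vocabulary ids of those stored logits
    """
    # Teacher distribution restricted to the stored top-K support.
    teacher_probs = F.softmax(teacher_topk_values / temperature, dim=-1)

    # Student log-probabilities gathered at the same vocabulary positions.
    student_logprobs = F.log_softmax(student_logits / temperature, dim=-1)
    student_topk_logprobs = torch.gather(student_logprobs, -1, teacher_topk_indices)

    # Minimise the cross-entropy between teacher and student over that support.
    return -(teacher_probs * student_topk_logprobs).sum(dim=-1).mean()
```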
## Performance Evaluation
Below are the benchmark results of SuperNova-Medius compared to similar models in its class:
| Model | Average | IFEval | BBH | GPQA | MMLU Pro | MuSR | Math Level 5 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Mistral-Small 2409 | 0.423 | 0.628 | 0.581 | 0.333 | 0.410 | 0.406 | 0.181 |
| Supernova-Lite | 0.427 | 0.786 | 0.511 | 0.306 | 0.388 | 0.415 | 0.155 |
| Qwen2.5-14B-Instruct | 0.450 | 0.827 | 0.623 | 0.358 | 0.490 | 0.403 | 0.000 |
| Supernova-Medius | **0.480** | **0.832** | **0.631** | **0.359** | **0.502** | **0.402** | **0.152** |
SuperNova-Medius performs exceptionally well in instruction-following (IFEval) and complex reasoning tasks (BBH), demonstrating its capability to handle a variety of real-world scenarios. It outperforms Qwen2.5-14B and SuperNova-Lite in multiple benchmarks, making it a powerful yet efficient choice for high-quality generative AI applications.
## Model Use Cases
Arcee-SuperNova-Medius is suitable for a range of applications, including:
- **Customer Support**: With its robust instruction-following and dialogue management capabilities, SuperNova-Medius can handle complex customer interactions, reducing the need for human intervention.
- **Content Creation**: The model’s advanced language understanding and generation abilities make it ideal for creating high-quality, coherent content across diverse domains.
- **Technical Assistance**: SuperNova-Medius has a deep reservoir of technical knowledge, making it an excellent assistant for programming, technical documentation, and other expert-level content creation.
## Deployment Options
SuperNova-Medius is available for use under the Apache-2.0 license. For those who need even higher performance, the full-size 70B SuperNova model can be accessed via an Arcee-hosted API or for local deployment. To learn more or explore deployment options, please reach out to [[email protected]](mailto:[email protected]).
## Technical Specifications
- **Model Architecture**: Qwen2.5-14B-Instruct
- **Distillation Sources**: Qwen2.5-72B-Instruct, Llama-3.1-405B-Instruct
- **Parameter Count**: 14 billion
- **Training Dataset**: Custom instruction dataset generated with [EvolKit](https://github.com/arcee-ai/EvolKit)
- **Distillation Technique**: Multi-architecture offline logit distillation with cross-architecture vocabulary alignment.
## Summary
Arcee-SuperNova-Medius provides a unique balance of power, efficiency, and versatility. By distilling knowledge from two top-performing teacher models into a single 14B parameter model, SuperNova-Medius achieves results that rival larger models while maintaining a compact size ideal for practical deployment. Whether for customer support, content creation, or technical assistance, SuperNova-Medius is the perfect choice for organizations looking to leverage advanced language model capabilities in a cost-effective and accessible form.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_arcee-ai__SuperNova-Medius)
| Metric |Value|
|-------------------|----:|
|Avg. |37.22|
|IFEval (0-Shot) |55.60|
|BBH (3-Shot) |49.30|
|MATH Lvl 5 (4-Shot)|32.48|
|GPQA (0-shot) |17.90|
|MuSR (0-shot) |19.19|
|MMLU-PRO (5-shot) |48.83|
|
JhonMR/Model_text_pros_fil_42_x_300_v8 | JhonMR | 2024-10-27T21:56:25Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T21:54:02Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Model_text_pros_fil_42_x_300_v8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model_text_pros_fil_42_x_300_v8
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5691
- Accuracy@en: 0.8533
- F1@en: 0.8506
- Precision@en: 0.8562
- Recall@en: 0.8516
- Loss@en: 0.5691
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
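For reference, a sketch of how the hyperparameters above map onto `transformers.TrainingArguments`; the output directory is a placeholder and the per-epoch evaluation is inferred from the results table below rather than stated explicitly.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Model_text_pros_fil_42_x_300_v8",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=30,
    eval_strategy="epoch",  # inferred from the per-epoch validation rows below
)
```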
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy@en | F1@en | Precision@en | Recall@en | Loss@en |
|:-------------:|:-----:|:-----:|:---------------:|:-----------:|:------:|:------------:|:---------:|:-------:|
| 3.3162 | 1.0 | 591 | 2.8406 | 0.1784 | 0.1149 | 0.1118 | 0.1852 | 2.8406 |
| 2.6493 | 2.0 | 1182 | 2.4608 | 0.2365 | 0.1641 | 0.1653 | 0.2400 | 2.4608 |
| 2.3811 | 3.0 | 1773 | 2.2707 | 0.2848 | 0.2282 | 0.2763 | 0.2883 | 2.2707 |
| 2.1866 | 4.0 | 2364 | 2.0891 | 0.3473 | 0.2910 | 0.3112 | 0.3485 | 2.0891 |
| 1.9741 | 5.0 | 2955 | 1.8783 | 0.3924 | 0.3569 | 0.4295 | 0.4008 | 1.8783 |
| 1.7036 | 6.0 | 3546 | 1.5903 | 0.4883 | 0.4549 | 0.5013 | 0.4907 | 1.5903 |
| 1.4521 | 7.0 | 4137 | 1.3334 | 0.5841 | 0.5650 | 0.5963 | 0.5879 | 1.3334 |
| 1.2217 | 8.0 | 4728 | 1.1444 | 0.6235 | 0.6027 | 0.6190 | 0.6253 | 1.1444 |
| 1.0442 | 9.0 | 5319 | 1.0212 | 0.6724 | 0.6590 | 0.6810 | 0.6734 | 1.0212 |
| 0.9103 | 10.0 | 5910 | 0.8917 | 0.7025 | 0.6807 | 0.7305 | 0.7041 | 0.8917 |
| 0.8089 | 11.0 | 6501 | 0.8282 | 0.7330 | 0.7266 | 0.7428 | 0.7346 | 0.8282 |
| 0.7184 | 12.0 | 7092 | 0.7637 | 0.7727 | 0.7683 | 0.7818 | 0.7719 | 0.7637 |
| 0.6422 | 13.0 | 7683 | 0.6982 | 0.7956 | 0.7918 | 0.8035 | 0.7977 | 0.6982 |
| 0.5677 | 14.0 | 8274 | 0.6570 | 0.8187 | 0.8123 | 0.8249 | 0.8183 | 0.6570 |
| 0.5141 | 15.0 | 8865 | 0.6345 | 0.8263 | 0.8234 | 0.8296 | 0.8259 | 0.6345 |
| 0.4619 | 16.0 | 9456 | 0.6085 | 0.8378 | 0.8348 | 0.8439 | 0.8367 | 0.6085 |
| 0.425 | 17.0 | 10047 | 0.6040 | 0.8429 | 0.8404 | 0.8489 | 0.8415 | 0.6040 |
| 0.3936 | 18.0 | 10638 | 0.5984 | 0.8457 | 0.8441 | 0.8498 | 0.8444 | 0.5984 |
| 0.3673 | 19.0 | 11229 | 0.5792 | 0.8511 | 0.8481 | 0.8551 | 0.8500 | 0.5792 |
| 0.3467 | 20.0 | 11820 | 0.5862 | 0.8463 | 0.8435 | 0.8490 | 0.8450 | 0.5862 |
| 0.3292 | 21.0 | 12411 | 0.5765 | 0.8470 | 0.8448 | 0.8509 | 0.8458 | 0.5765 |
| 0.31 | 22.0 | 13002 | 0.5769 | 0.8470 | 0.8448 | 0.8490 | 0.8463 | 0.5769 |
| 0.2966 | 23.0 | 13593 | 0.5691 | 0.8533 | 0.8506 | 0.8562 | 0.8516 | 0.5691 |
| 0.2827 | 24.0 | 14184 | 0.5711 | 0.8543 | 0.8517 | 0.8563 | 0.8523 | 0.5711 |
| 0.2693 | 25.0 | 14775 | 0.5788 | 0.8546 | 0.8518 | 0.8555 | 0.8532 | 0.5788 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
kristiannordby/text-to-sql-v3 | kristiannordby | 2024-10-27T21:56:07Z | 175 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-27T19:54:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf | RichardErkhov | 2024-10-27T21:50:14Z | 17 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-10-27T19:19:49Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B - GGUF
- Model creator: https://huggingface.co/dat-lequoc/
- Original model: https://huggingface.co/dat-lequoc/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q2_K.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q2_K.gguf) | Q2_K | 0.63GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q3_K.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q3_K.gguf) | Q3_K | 0.77GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q4_0.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q4_0.gguf) | Q4_0 | 0.87GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q4_K.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q4_K.gguf) | Q4_K | 0.92GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q4_1.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q4_1.gguf) | Q4_1 | 0.95GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q5_0.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q5_0.gguf) | Q5_0 | 1.02GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q5_K.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q5_K.gguf) | Q5_K | 1.05GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q5_1.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q5_1.gguf) | Q5_1 | 1.1GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q6_K.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q6_K.gguf) | Q6_K | 1.19GB |
| [fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q8_0.gguf](https://huggingface.co/RichardErkhov/dat-lequoc_-_fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B-gguf/blob/main/fast-apply-16bit-v0.v15-Qwen2.5-Coder-1.5B.Q8_0.gguf) | Q8_0 | 1.53GB |
Original model description:
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf | RichardErkhov | 2024-10-27T21:49:52Z | 6 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-27T19:43:14Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
distil-mistral-1.5B-v0.1 - GGUF
- Model creator: https://huggingface.co/sanchit-gandhi/
- Original model: https://huggingface.co/sanchit-gandhi/distil-mistral-1.5B-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [distil-mistral-1.5B-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q2_K.gguf) | Q2_K | 0.59GB |
| [distil-mistral-1.5B-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.68GB |
| [distil-mistral-1.5B-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q3_K.gguf) | Q3_K | 0.74GB |
| [distil-mistral-1.5B-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.74GB |
| [distil-mistral-1.5B-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.79GB |
| [distil-mistral-1.5B-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.IQ4_XS.gguf) | IQ4_XS | 0.82GB |
| [distil-mistral-1.5B-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q4_0.gguf) | Q4_0 | 0.86GB |
| [distil-mistral-1.5B-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.IQ4_NL.gguf) | IQ4_NL | 0.86GB |
| [distil-mistral-1.5B-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q4_K_S.gguf) | Q4_K_S | 0.86GB |
| [distil-mistral-1.5B-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q4_K.gguf) | Q4_K | 0.89GB |
| [distil-mistral-1.5B-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q4_K_M.gguf) | Q4_K_M | 0.89GB |
| [distil-mistral-1.5B-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q4_1.gguf) | Q4_1 | 0.94GB |
| [distil-mistral-1.5B-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q5_0.gguf) | Q5_0 | 1.02GB |
| [distil-mistral-1.5B-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [distil-mistral-1.5B-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q5_K.gguf) | Q5_K | 1.04GB |
| [distil-mistral-1.5B-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q5_K_M.gguf) | Q5_K_M | 1.04GB |
| [distil-mistral-1.5B-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q5_1.gguf) | Q5_1 | 1.11GB |
| [distil-mistral-1.5B-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q6_K.gguf) | Q6_K | 1.2GB |
| [distil-mistral-1.5B-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/sanchit-gandhi_-_distil-mistral-1.5B-v0.1-gguf/blob/main/distil-mistral-1.5B-v0.1.Q8_0.gguf) | Q8_0 | 1.56GB |
Original model description:
---
datasets:
- HuggingFaceTB/cosmopedia
library_name: transformers
---
To reproduce this run:
```bash
accelerate launch --multi_gpu --mixed_precision=fp16 --num_processes=8 run_distillation.py config_mistral.yaml
```
|
RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf | RichardErkhov | 2024-10-27T21:41:51Z | 43 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T19:35:02Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qwen-2.5-1.5b-finetuned-for-sql-generation - GGUF
- Model creator: https://huggingface.co/abdulmannan-01/
- Original model: https://huggingface.co/abdulmannan-01/qwen-2.5-1.5b-finetuned-for-sql-generation/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q2_K.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q2_K.gguf) | Q2_K | 0.63GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q3_K.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q3_K.gguf) | Q3_K | 0.77GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q4_0.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q4_0.gguf) | Q4_0 | 0.87GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q4_K.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q4_K.gguf) | Q4_K | 0.92GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q4_1.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q4_1.gguf) | Q4_1 | 0.95GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q5_0.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q5_0.gguf) | Q5_0 | 1.02GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q5_K.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q5_K.gguf) | Q5_K | 1.05GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q5_1.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q5_1.gguf) | Q5_1 | 1.1GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q6_K.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q6_K.gguf) | Q6_K | 1.19GB |
| [qwen-2.5-1.5b-finetuned-for-sql-generation.Q8_0.gguf](https://huggingface.co/RichardErkhov/abdulmannan-01_-_qwen-2.5-1.5b-finetuned-for-sql-generation-gguf/blob/main/qwen-2.5-1.5b-finetuned-for-sql-generation.Q8_0.gguf) | Q8_0 | 1.53GB |
Original model description:
---
library_name: transformers
tags: []
---
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Qwen2.5 1.5B transformers model fine-tuned to generate valid SQL.
- **Developed by:** Abdul Mannan
- **Finetuned from model:** Qwen/Qwen2.5-1.5B-Instruct
|
pyterrier-quality/mqt5-small | pyterrier-quality | 2024-10-27T21:39:27Z | 9 | 0 | null | [
"safetensors",
"mt5",
"arxiv:2407.12170",
"region:us"
] | null | 2024-10-27T21:31:47Z | For use with the [`pyterrier-quality`](https://github.com/terrierteam/pyterrier-quality) package.
A version of mt5-small trained as a passage quality estimation model using the approach described in [this paper](https://arxiv.org/pdf/2407.12170), over the following datasets:
msmarco-passage, mmarco/de, mmarco/es, mmarco/fr, mmarco/id, mmarco/it, mmarco/pt, mmarco/ru, mmarco/v2/ar, mmarco/v2/de, mmarco/v2/dt, mmarco/v2/es, mmarco/v2/fr, mmarco/v2/hi, mmarco/v2/id, mmarco/v2/it, mmarco/v2/ja, mmarco/v2/pt, mmarco/v2/ru, mmarco/v2/vi, mmarco/v2/zh, mmarco/zh, neumarco/fa, neumarco/ru, neumarco/zh
```python
>>> from pyterrier_quality import QualT5
>>> qt5 = QualT5('pyterrier-quality/mqt5-small')
>>> qt5([
... {'docno': '0', 'text': 'bla bla bla'},
... {'docno': '0', 'text': 'The presence of communication amid scientific minds was equally important to the success of the Manhattan Project as scientific intellect was. The only cloud hanging over the impressive achievement of the atomic researchers and engineers is what their success truly meant; hundreds of thousands of innocent lives obliterated.'},
... ])
docno text quality
0 0 bla bla bla -1.406250
1 0 The presence of communication amid scientific ... -0.828125
>>> # A larger quality score means higher quality
``` |
jcbthnflrs/llama381binstruct_summarize_short_merged | jcbthnflrs | 2024-10-27T21:35:27Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"summarization",
"legal-ai",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | summarization | 2024-10-27T21:09:02Z | ---
library_name: transformers
tags:
- trl
- sft
- summarization
- legal-ai
---
# Model Card for Legal Document Summarizer
<!-- Provide a quick summary of what the model is/does. -->
This model is fine-tuned to convert legal documents into human-readable summaries using Llama 3 8B Instruct as the base model. It was trained using QLoRA/LoRA techniques for efficient fine-tuning.
## Model Details
### Model Description
This is a fine-tuned version of NousResearch/Meta-Llama-3-8B-Instruct, optimized for summarizing legal documents in plain English. The model uses Parameter-Efficient Fine-Tuning (PEFT) methods, specifically LoRA, to achieve performance comparable to full fine-tuning while using significantly fewer computational resources.
- **Developed by:** jcbthnflrs
- **Model type:** Causal Language Model (LLaMA 3 Architecture)
- **Language(s):** English
- **License:** [Base model license applies]
- **Finetuned from model:** NousResearch/Meta-Llama-3-8B-Instruct
### Model Sources
- **Base Model:** [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
- **Training Code:** Based on LLM Engineering Challenge from AI Makerspace
## Uses
### Direct Use
This model is designed for converting legal documents, terms of service, and other legal content into plain English summaries that are easier for general audiences to understand. It can be used directly through the Hugging Face API or interface.
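A minimal inference sketch, assuming the merged checkpoint loads directly with `transformers` (the repository is stored 4-bit with bitsandbytes, so the `bitsandbytes` package is likely required). The system prompt and generation settings are illustrative, not the exact template used during training.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jcbthnflrs/llama381binstruct_summarize_short_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

legal_text = "..."  # paste the clause or terms-of-service excerpt to summarize
messages = [
    {"role": "system", "content": "Summarize the following legal text in plain English."},
    {"role": "user", "content": legal_text},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```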
### Downstream Use
The model can be integrated into:
- Legal document processing systems
- Terms of service simplification tools
- Contract analysis applications
- Legal document management systems
### Out-of-Scope Use
The model should not be used as a replacement for legal advice or professional legal interpretation. It is meant to assist in understanding legal documents but not to provide legal guidance.
## Training Details
### Training Data
The model was trained on the Plain English Summary of Contracts dataset, which contains pairs of legal documents (EULA, TOS, etc.) and their natural language summaries. The dataset was split into:
- Training set: 68 examples
- Test set: 9 examples
- Validation set: 8 examples
### Training Procedure
#### Preprocessing
- Input text is formatted using a specific template following Llama 3's chat format
- Special tokens are used to mark legal document boundaries
- Maximum sequence length: 2048 tokens
#### Training Hyperparameters
- **Training regime:** 4-bit quantization using QLoRA (see the configuration sketch after this list)
- **Optimizer:** AdamW
- **Learning rate:** 2e-4
- **Batch size:** 1 per device
- **Training steps:** 500
- **Warmup steps:** 30
- **Evaluation steps:** 25
- **Learning rate scheduler:** Linear
- **LoRA rank (r):** 16
- **LoRA alpha:** 32
- **LoRA dropout:** 0.1
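A hedged reconstruction of the PEFT/bitsandbytes configuration implied by the hyperparameters above; the 4-bit quantization details and `target_modules` are assumptions that are not stated in this card.

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit (QLoRA-style) quantization of the base model -- exact settings assumed.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter settings taken from the hyperparameter list above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed, not stated in the card
)
```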
### Hardware and Software
#### Hardware Requirements
- GPU: T4 or better
- Memory: Optimized for consumer-level resources through QLoRA
#### Software Requirements
- transformers library
- PEFT library
- bitsandbytes for quantization
- TRL for supervised fine-tuning
## Evaluation
Training metrics show:
- Starting training loss: ~1.52
- Final training loss: ~0.0006
- Final validation loss: ~2.74
## Model Card Authors
@jcbthnflrs
## Model Card Contact
https://x.com/jcbthnflrs |
GitBag/rloo_6_lr_2e-7_555134_1730042202 | GitBag | 2024-10-27T21:25:41Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T21:20:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mtrazzak/smollm360m-arch | mtrazzak | 2024-10-27T21:22:39Z | 146 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T21:16:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JacobLinCool/MP-SENet-VB | JacobLinCool | 2024-10-27T21:22:32Z | 45 | 0 | null | [
"safetensors",
"arxiv:2308.08926",
"audio",
"denoising",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"speech",
"speech-enhancement",
"audio-to-audio",
"license:mit",
"region:us"
] | audio-to-audio | 2024-10-27T21:19:32Z | ---
license: mit
pipeline_tag: audio-to-audio
tags:
- arxiv:2308.08926
- audio
- denoising
- model_hub_mixin
- pytorch_model_hub_mixin
- speech
- speech-enhancement
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/yxlu-0102/MP-SENet
- Docs: [More Information Needed] |
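As a minimal, hypothetical sketch of the PyTorchModelHubMixin pattern used by the MP-SENet-VB checkpoint above: any `nn.Module` that inherits the mixin gains `save_pretrained`, `push_to_hub`, and `from_pretrained`. The class name `MPSENet` and its layers below are placeholders, not the library's real API.

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


class MPSENet(nn.Module, PyTorchModelHubMixin):
    """Placeholder module standing in for the real MP-SENet architecture."""

    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.net = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        return self.net(x)


# With the real class from https://github.com/yxlu-0102/MP-SENet, the weights
# in this repo would be loaded as:
# model = MPSENet.from_pretrained("JacobLinCool/MP-SENet-VB")
```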
g-assismoraes/mdeberta-semeval25_maxf1_fold5 | g-assismoraes | 2024-10-27T21:22:28Z | 197 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T21:18:41Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_maxf1_fold5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_maxf1_fold5
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.7990
- Precision Samples: 0.1660
- Recall Samples: 0.4739
- F1 Samples: 0.2271
- Precision Macro: 0.8737
- Recall Macro: 0.3063
- F1 Macro: 0.2305
- Precision Micro: 0.1452
- Recall Micro: 0.3694
- F1 Micro: 0.2085
- Precision Weighted: 0.6156
- Recall Weighted: 0.3694
- F1 Weighted: 0.1150
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
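As a rough sketch, the hyperparameters above map onto 🤗 `TrainingArguments` as follows (the output directory is a placeholder; data loading, the model head, and metric computation are omitted):

```python
from transformers import TrainingArguments

# Mirrors the values listed above; AdamW (torch) with betas=(0.9, 0.999) and
# epsilon=1e-08 is the default optimizer configuration.
training_args = TrainingArguments(
    output_dir="mdeberta-semeval25_maxf1_fold5",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```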
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.6336 | 1.0 | 19 | 9.9912 | 1.0 | 0.0 | 0.0 | 1.0 | 0.2 | 0.2 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 9.4458 | 2.0 | 38 | 9.6839 | 0.1655 | 0.2456 | 0.1864 | 0.9814 | 0.2222 | 0.2062 | 0.1655 | 0.1381 | 0.1506 | 0.8858 | 0.1381 | 0.0406 |
| 9.4682 | 3.0 | 57 | 9.4970 | 0.1563 | 0.2984 | 0.1882 | 0.9642 | 0.2349 | 0.2108 | 0.1536 | 0.1772 | 0.1646 | 0.8400 | 0.1772 | 0.0523 |
| 9.0448 | 4.0 | 76 | 9.3362 | 0.1425 | 0.3520 | 0.1881 | 0.9422 | 0.2569 | 0.2152 | 0.1377 | 0.2312 | 0.1726 | 0.7853 | 0.2312 | 0.0641 |
| 9.0807 | 5.0 | 95 | 9.1579 | 0.1457 | 0.3938 | 0.1992 | 0.9336 | 0.2692 | 0.2188 | 0.1407 | 0.2763 | 0.1864 | 0.7446 | 0.2763 | 0.0831 |
| 8.6731 | 6.0 | 114 | 9.0240 | 0.1615 | 0.4392 | 0.2203 | 0.8931 | 0.2830 | 0.2251 | 0.1454 | 0.3153 | 0.1991 | 0.6493 | 0.3153 | 0.0996 |
| 8.9953 | 7.0 | 133 | 8.9087 | 0.1693 | 0.4769 | 0.2311 | 0.8866 | 0.3012 | 0.2322 | 0.1509 | 0.3634 | 0.2132 | 0.6350 | 0.3634 | 0.1190 |
| 9.1116 | 8.0 | 152 | 8.8515 | 0.1689 | 0.4669 | 0.2294 | 0.8861 | 0.3010 | 0.2315 | 0.1482 | 0.3544 | 0.2090 | 0.6334 | 0.3544 | 0.1167 |
| 8.5738 | 9.0 | 171 | 8.8080 | 0.1672 | 0.4879 | 0.2303 | 0.8754 | 0.3143 | 0.2330 | 0.1471 | 0.3904 | 0.2136 | 0.6191 | 0.3904 | 0.1203 |
| 9.1037 | 10.0 | 190 | 8.7990 | 0.1660 | 0.4739 | 0.2271 | 0.8737 | 0.3063 | 0.2305 | 0.1452 | 0.3694 | 0.2085 | 0.6156 | 0.3694 | 0.1150 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
GitBag/rloo_5_lr_2e-7_555134_1730031306 | GitBag | 2024-10-27T21:20:32Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T21:15:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GitBag/rloo_1_2_h_lr_2e-7_555134_1730036742 | GitBag | 2024-10-27T21:15:09Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T21:09:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf | RichardErkhov | 2024-10-27T21:12:21Z | 21 | 0 | null | [
"gguf",
"arxiv:2410.07002",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T19:03:15Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CursorCore-Yi-1.5B-LC - GGUF
- Model creator: https://huggingface.co/TechxGenus/
- Original model: https://huggingface.co/TechxGenus/CursorCore-Yi-1.5B-LC/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CursorCore-Yi-1.5B-LC.Q2_K.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q2_K.gguf) | Q2_K | 0.59GB |
| [CursorCore-Yi-1.5B-LC.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q3_K_S.gguf) | Q3_K_S | 0.67GB |
| [CursorCore-Yi-1.5B-LC.Q3_K.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q3_K.gguf) | Q3_K | 0.73GB |
| [CursorCore-Yi-1.5B-LC.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q3_K_M.gguf) | Q3_K_M | 0.73GB |
| [CursorCore-Yi-1.5B-LC.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q3_K_L.gguf) | Q3_K_L | 0.77GB |
| [CursorCore-Yi-1.5B-LC.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.IQ4_XS.gguf) | IQ4_XS | 0.78GB |
| [CursorCore-Yi-1.5B-LC.Q4_0.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q4_0.gguf) | Q4_0 | 0.81GB |
| [CursorCore-Yi-1.5B-LC.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.IQ4_NL.gguf) | IQ4_NL | 0.81GB |
| [CursorCore-Yi-1.5B-LC.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q4_K_S.gguf) | Q4_K_S | 0.84GB |
| [CursorCore-Yi-1.5B-LC.Q4_K.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q4_K.gguf) | Q4_K | 0.9GB |
| [CursorCore-Yi-1.5B-LC.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q4_K_M.gguf) | Q4_K_M | 0.9GB |
| [CursorCore-Yi-1.5B-LC.Q4_1.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q4_1.gguf) | Q4_1 | 0.89GB |
| [CursorCore-Yi-1.5B-LC.Q5_0.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q5_0.gguf) | Q5_0 | 0.96GB |
| [CursorCore-Yi-1.5B-LC.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q5_K_S.gguf) | Q5_K_S | 0.98GB |
| [CursorCore-Yi-1.5B-LC.Q5_K.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q5_K.gguf) | Q5_K | 1.02GB |
| [CursorCore-Yi-1.5B-LC.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q5_K_M.gguf) | Q5_K_M | 1.02GB |
| [CursorCore-Yi-1.5B-LC.Q5_1.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q5_1.gguf) | Q5_1 | 1.04GB |
| [CursorCore-Yi-1.5B-LC.Q6_K.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q6_K.gguf) | Q6_K | 1.19GB |
| [CursorCore-Yi-1.5B-LC.Q8_0.gguf](https://huggingface.co/RichardErkhov/TechxGenus_-_CursorCore-Yi-1.5B-LC-gguf/blob/main/CursorCore-Yi-1.5B-LC.Q8_0.gguf) | Q8_0 | 1.46GB |
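As a usage sketch only (the file name, context size, and prompt are illustrative, not recommendations from this repo), one of the GGUF files above can be run locally with `llama-cpp-python`:

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file has been downloaded next to this script.
llm = Llama(model_path="CursorCore-Yi-1.5B-LC.Q4_K_M.gguf", n_ctx=4096)

out = llm("def quick_sort(arr):", max_tokens=64)
print(out["choices"][0]["text"])
```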
Original model description:
---
tags:
- code
base_model:
- 01-ai/Yi-Coder-1.5B
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
# CursorCore: Assist Programming through Aligning Anything
<p align="center">
<a href="http://arxiv.org/abs/2410.07002">[📄arXiv]</a> |
<a href="https://hf.co/papers/2410.07002">[🤗HF Paper]</a> |
<a href="https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2">[🤖Models]</a> |
<a href="https://github.com/TechxGenus/CursorCore">[🛠️Code]</a> |
<a href="https://github.com/TechxGenus/CursorWeb">[Web]</a> |
<a href="https://discord.gg/Z5Tev8fV">[Discord]</a>
</p>
<hr>
- [CursorCore: Assist Programming through Aligning Anything](#cursorcore-assist-programming-through-aligning-anything)
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [1) Normal chat](#1-normal-chat)
- [2) Assistant-Conversation](#2-assistant-conversation)
- [3) Web Demo](#3-web-demo)
- [Future Work](#future-work)
- [Citation](#citation)
- [Contribution](#contribution)
<hr>
## Introduction
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read [our paper](http://arxiv.org/abs/2410.07002) to learn more.
<p align="center">
<img width="100%" alt="conversation" src="https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/conversation.png">
</p>

## Models
Our models have been open-sourced on Hugging Face. You can access our models here: [CursorCore-Series](https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2). We also provide pre-quantized weights for GPTQ and AWQ here: [CursorCore-Quantization](https://huggingface.co/collections/TechxGenus/cursorcore-quantization-67066431f29f252494ee8cf3).
## Usage
Here are some examples of how to use our model:
### 1) Normal chat
Script:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "user", "content": "Hi!"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
````
Output:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>user
Hi!<|im_end|>
<|im_start|>assistant
Hello! I'm an AI language model and I can help you with any programming questions you might have. What specific problem or task are you trying to solve?<|im_end|>
````
### 2) Assistant-Conversation
In our work, we introduce a new framework for the AI-assisted programming task. It is designed to align anything during the programming process and is used to implement features like Tab and Inline Chat.
Script 1:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [
{
"type": "code",
"lang": "python",
"code": """def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
}
],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": ""
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 1:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>history
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
if len(array) <= 1:
return array
pivot = array[len(array) // 2]
left = [x for x in array if x < pivot]
middle = [x for x in array if x == pivot]
right = [x for x in array if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|>
The user has provided a revised code change that corrects the parameter name from `arr` to `array` in the `quick_sort` function. This change ensures consistency in the function definition and avoids potential confusion or errors.
To implement this, we will:
1. Update the parameter name in the function definition from `arr` to `array`.
2. Ensure that all references to `arr` within the function are updated to `array`.
This will make the function definition and internal references consistent, improving code readability and maintainability.<|im_end|>
````
Script 2:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 2:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
"""
This is an implementation of the quick sort algorithm.
"""
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|><|im_end|>
````
For models in Locate-and-Change (LC) and Search-and-Replace (SR) formats, the output examples are as follows:
Script for LC:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_lc
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-LC",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_lc(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for LC:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
1 def quick_sort(array):
2 if len(arr) <= 1:
3 return arr
4 pivot = arr[len(arr) // 2]
5 left = [x for x in arr if x < pivot]
6 middle = [x for x in arr if x == pivot]
7 right = [x for x in arr if x > pivot]
8 return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>1,1
```
'''This function will sort an array using quick sort algorithm'''
```<|next_end|>
To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future.
The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand.
Here's the plan:
1. Add a docstring at the beginning of the `quick_sort` function.
2. Ensure the docstring is clear and concise, describing the purpose of the function.
This modification will improve the code's documentation without altering its functionality.<|im_end|>
````
Script for SR:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_sr
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-SR",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_sr(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for SR:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
<|search_and_replace|>
def quick_sort(array):
"""
This function implements quick sort algorithm
"""
```<|next_end|><|im_end|>
````
### 3) Web Demo
We have created a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details.
## Future Work
CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example:
- Repository-level editing support
- Better and faster editing formats
- Better user interface and presentation
- ...
## Citation
```bibtex
@article{jiang2024cursorcore,
title = {CursorCore: Assist Programming through Aligning Anything},
author = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang},
year = {2024},
journal = {arXiv preprint arXiv: 2410.07002}
}
```
## Contribution
Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
|
g-assismoraes/mdeberta-semeval25_maxf1_fold2 | g-assismoraes | 2024-10-27T21:09:45Z | 164 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T21:05:31Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_maxf1_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_maxf1_fold2
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.8694
- Precision Samples: 0.1448
- Recall Samples: 0.4710
- F1 Samples: 0.2054
- Precision Macro: 0.8790
- Recall Macro: 0.3070
- F1 Macro: 0.2198
- Precision Micro: 0.1280
- Recall Micro: 0.3545
- F1 Micro: 0.1881
- Precision Weighted: 0.6566
- Recall Weighted: 0.3545
- F1 Weighted: 0.1024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.355 | 1.0 | 19 | 9.8646 | 0.4690 | 0.1586 | 0.1586 | 0.9914 | 0.1974 | 0.1928 | 0.23 | 0.0697 | 0.1070 | 0.9300 | 0.0697 | 0.0322 |
| 10.0042 | 2.0 | 38 | 9.5711 | 0.1517 | 0.2663 | 0.1814 | 0.9710 | 0.2144 | 0.1962 | 0.1493 | 0.1515 | 0.1504 | 0.8541 | 0.1515 | 0.0453 |
| 9.8111 | 3.0 | 57 | 9.4447 | 0.1190 | 0.31 | 0.1625 | 0.9497 | 0.2248 | 0.1978 | 0.1202 | 0.1818 | 0.1448 | 0.7968 | 0.1818 | 0.0506 |
| 9.5882 | 4.0 | 76 | 9.3361 | 0.1149 | 0.3593 | 0.1645 | 0.9292 | 0.2427 | 0.2010 | 0.1124 | 0.2333 | 0.1517 | 0.7463 | 0.2333 | 0.0586 |
| 9.2717 | 5.0 | 95 | 9.2287 | 0.1179 | 0.3825 | 0.1684 | 0.8992 | 0.2584 | 0.2061 | 0.1135 | 0.2667 | 0.1593 | 0.6898 | 0.2667 | 0.0697 |
| 9.4865 | 6.0 | 114 | 9.1175 | 0.1358 | 0.4366 | 0.1948 | 0.9025 | 0.2876 | 0.2179 | 0.1253 | 0.3182 | 0.1798 | 0.6965 | 0.3182 | 0.0869 |
| 9.2006 | 7.0 | 133 | 8.9906 | 0.1428 | 0.4627 | 0.2029 | 0.8891 | 0.3008 | 0.2184 | 0.1275 | 0.3455 | 0.1863 | 0.6810 | 0.3455 | 0.0981 |
| 9.0527 | 8.0 | 152 | 8.9299 | 0.1434 | 0.4696 | 0.2040 | 0.8874 | 0.3057 | 0.2181 | 0.1271 | 0.3515 | 0.1866 | 0.6767 | 0.3515 | 0.0977 |
| 9.3313 | 9.0 | 171 | 8.8794 | 0.1450 | 0.4727 | 0.2053 | 0.8803 | 0.3092 | 0.2203 | 0.1280 | 0.3576 | 0.1885 | 0.6602 | 0.3576 | 0.1037 |
| 8.4989 | 10.0 | 190 | 8.8694 | 0.1448 | 0.4710 | 0.2054 | 0.8790 | 0.3070 | 0.2198 | 0.1280 | 0.3545 | 0.1881 | 0.6566 | 0.3545 | 0.1024 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
DouglasBraga/swin-tiny-patch4-window7-224-finetuned-leukemia.v2.0 | DouglasBraga | 2024-10-27T21:05:59Z | 214 | 0 | transformers | [
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-10-14T19:36:03Z | ---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-leukemia.v2.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-leukemia.v2.0
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5694
- Accuracy: 0.8855
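As a hedged usage sketch (the example image path is a placeholder and the label set depends on the undocumented fine-tuning dataset), the checkpoint can be loaded with the image-classification pipeline:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="DouglasBraga/swin-tiny-patch4-window7-224-finetuned-leukemia.v2.0",
)
print(classifier("blood_smear_example.png"))  # placeholder image path
```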
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.4734 | 0.9984 | 312 | 0.7528 | 0.5968 |
| 0.3596 | 2.0 | 625 | 0.8091 | 0.688 |
| 0.2991 | 2.9984 | 937 | 0.9220 | 0.6335 |
| 0.2658 | 4.0 | 1250 | 0.7774 | 0.7137 |
| 0.2511 | 4.9984 | 1562 | 0.4364 | 0.8267 |
| 0.2218 | 6.0 | 1875 | 0.6225 | 0.7837 |
| 0.1691 | 6.9984 | 2187 | 0.3587 | 0.8718 |
| 0.1721 | 8.0 | 2500 | 0.6494 | 0.7987 |
| 0.1393 | 8.9984 | 2812 | 0.6802 | 0.818 |
| 0.1109 | 10.0 | 3125 | 0.5511 | 0.834 |
| 0.1213 | 10.9984 | 3437 | 0.5982 | 0.8417 |
| 0.0971 | 12.0 | 3750 | 0.8005 | 0.814 |
| 0.1121 | 12.9984 | 4062 | 0.6397 | 0.8407 |
| 0.0947 | 14.0 | 4375 | 1.0869 | 0.768 |
| 0.1022 | 14.9984 | 4687 | 0.5969 | 0.8515 |
| 0.0801 | 16.0 | 5000 | 0.5839 | 0.8732 |
| 0.0951 | 16.9984 | 5312 | 0.8599 | 0.827 |
| 0.0716 | 18.0 | 5625 | 0.8355 | 0.822 |
| 0.0859 | 18.9984 | 5937 | 0.7547 | 0.8427 |
| 0.0661 | 20.0 | 6250 | 0.7206 | 0.851 |
| 0.0543 | 20.9984 | 6562 | 0.8396 | 0.8363 |
| 0.0646 | 22.0 | 6875 | 0.5467 | 0.881 |
| 0.0563 | 22.9984 | 7187 | 0.5694 | 0.8855 |
| 0.042 | 24.0 | 7500 | 0.8404 | 0.8492 |
| 0.0638 | 24.9984 | 7812 | 0.9300 | 0.84 |
| 0.0455 | 26.0 | 8125 | 0.9865 | 0.8393 |
| 0.037 | 26.9984 | 8437 | 0.8503 | 0.8525 |
| 0.0469 | 28.0 | 8750 | 0.8272 | 0.8602 |
| 0.0409 | 28.9984 | 9062 | 0.8988 | 0.8502 |
| 0.0438 | 29.9520 | 9360 | 0.8338 | 0.858 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
g-assismoraes/mdeberta-semeval25_narratives09_maxf1_fold5 | g-assismoraes | 2024-10-27T20:59:37Z | 161 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T20:55:33Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_narratives09_maxf1_fold5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_narratives09_maxf1_fold5
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0204
- Precision Samples: 0.3603
- Recall Samples: 0.7663
- F1 Samples: 0.4556
- Precision Macro: 0.6906
- Recall Macro: 0.5586
- F1 Macro: 0.3769
- Precision Micro: 0.3165
- Recall Micro: 0.7293
- F1 Micro: 0.4414
- Precision Weighted: 0.4601
- Recall Weighted: 0.7293
- F1 Weighted: 0.3993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 5.5606 | 1.0 | 19 | 5.1744 | 1.0 | 0.0 | 0.0 | 1.0 | 0.1429 | 0.1429 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 4.8514 | 2.0 | 38 | 4.9269 | 0.2759 | 0.2532 | 0.2276 | 0.9377 | 0.2238 | 0.1873 | 0.2880 | 0.2068 | 0.2407 | 0.8409 | 0.2068 | 0.1109 |
| 5.1079 | 3.0 | 57 | 4.6308 | 0.3793 | 0.4853 | 0.3604 | 0.8762 | 0.3242 | 0.2396 | 0.3420 | 0.4474 | 0.3876 | 0.6961 | 0.4474 | 0.2402 |
| 4.5129 | 4.0 | 76 | 4.4135 | 0.3422 | 0.6197 | 0.4125 | 0.7822 | 0.4150 | 0.2908 | 0.3175 | 0.5789 | 0.4101 | 0.5507 | 0.5789 | 0.3086 |
| 4.3874 | 5.0 | 95 | 4.2916 | 0.3576 | 0.6623 | 0.4341 | 0.7168 | 0.4431 | 0.3203 | 0.3265 | 0.6015 | 0.4233 | 0.4756 | 0.6015 | 0.3449 |
| 4.0833 | 6.0 | 114 | 4.1434 | 0.3378 | 0.7416 | 0.4323 | 0.7113 | 0.5131 | 0.3405 | 0.2992 | 0.7030 | 0.4198 | 0.4708 | 0.7030 | 0.3689 |
| 3.9936 | 7.0 | 133 | 4.0974 | 0.3532 | 0.7462 | 0.4496 | 0.6927 | 0.5341 | 0.3701 | 0.3160 | 0.7068 | 0.4367 | 0.4609 | 0.7068 | 0.3929 |
| 3.9677 | 8.0 | 152 | 4.0596 | 0.3606 | 0.7537 | 0.4543 | 0.6921 | 0.5484 | 0.3768 | 0.3193 | 0.7105 | 0.4406 | 0.4618 | 0.7105 | 0.3981 |
| 4.0104 | 9.0 | 171 | 4.0379 | 0.3547 | 0.7571 | 0.4524 | 0.6964 | 0.5523 | 0.3803 | 0.3177 | 0.7143 | 0.4398 | 0.4641 | 0.7143 | 0.3998 |
| 3.9613 | 10.0 | 190 | 4.0204 | 0.3603 | 0.7663 | 0.4556 | 0.6906 | 0.5586 | 0.3769 | 0.3165 | 0.7293 | 0.4414 | 0.4601 | 0.7293 | 0.3993 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
g-assismoraes/mdeberta-semeval25_narratives09_maxf1_fold4 | g-assismoraes | 2024-10-27T20:55:28Z | 161 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T20:51:12Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_narratives09_maxf1_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_narratives09_maxf1_fold4
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7723
- Precision Samples: 0.3728
- Recall Samples: 0.7825
- F1 Samples: 0.4666
- Precision Macro: 0.6810
- Recall Macro: 0.4981
- F1 Macro: 0.2753
- Precision Micro: 0.3085
- Recall Micro: 0.7647
- F1 Micro: 0.4397
- Precision Weighted: 0.4751
- Recall Weighted: 0.7647
- F1 Weighted: 0.3999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 5.7927 | 1.0 | 19 | 4.9875 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0476 | 0.0476 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 5.0899 | 2.0 | 38 | 4.7740 | 0.3023 | 0.3386 | 0.2905 | 0.8797 | 0.1700 | 0.1306 | 0.316 | 0.3098 | 0.3129 | 0.7069 | 0.3098 | 0.2068 |
| 5.1834 | 3.0 | 57 | 4.4517 | 0.3345 | 0.4776 | 0.3732 | 0.8493 | 0.2311 | 0.1457 | 0.3314 | 0.4471 | 0.3806 | 0.6524 | 0.4471 | 0.2368 |
| 4.8195 | 4.0 | 76 | 4.2678 | 0.3568 | 0.6033 | 0.4120 | 0.7813 | 0.3360 | 0.2022 | 0.2962 | 0.5843 | 0.3931 | 0.5651 | 0.5843 | 0.3175 |
| 4.6183 | 5.0 | 95 | 4.0323 | 0.3872 | 0.6493 | 0.4394 | 0.7313 | 0.3521 | 0.2083 | 0.3204 | 0.6157 | 0.4215 | 0.5136 | 0.6157 | 0.3340 |
| 4.4332 | 6.0 | 114 | 3.9321 | 0.3921 | 0.7197 | 0.4615 | 0.7159 | 0.4256 | 0.2492 | 0.3092 | 0.7020 | 0.4293 | 0.4982 | 0.7020 | 0.3797 |
| 4.0992 | 7.0 | 133 | 3.8524 | 0.3728 | 0.7641 | 0.4640 | 0.6877 | 0.4789 | 0.2773 | 0.3147 | 0.7490 | 0.4432 | 0.4802 | 0.7490 | 0.4020 |
| 4.1885 | 8.0 | 152 | 3.7985 | 0.3751 | 0.7932 | 0.4751 | 0.6821 | 0.4933 | 0.2773 | 0.3176 | 0.7647 | 0.4488 | 0.4788 | 0.7647 | 0.4065 |
| 4.3678 | 9.0 | 171 | 3.7859 | 0.3739 | 0.7825 | 0.4678 | 0.6821 | 0.4981 | 0.2766 | 0.3105 | 0.7647 | 0.4417 | 0.4760 | 0.7647 | 0.4010 |
| 3.9512 | 10.0 | 190 | 3.7723 | 0.3728 | 0.7825 | 0.4666 | 0.6810 | 0.4981 | 0.2753 | 0.3085 | 0.7647 | 0.4397 | 0.4751 | 0.7647 | 0.3999 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
Vikhrmodels/salt-116k | Vikhrmodels | 2024-10-27T20:53:55Z | 138 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"base_model:TinyLlama/TinyLlama_v1.1",
"base_model:finetune:TinyLlama/TinyLlama_v1.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-16T22:29:49Z | ---
library_name: transformers
license: apache-2.0
language:
- en
base_model:
- TinyLlama/TinyLlama_v1.1
---
# Vikhr Salt: Speech And Language Transformer

Vikhr Salt is a multimodal model based on a pre-trained large language model, extended with new audio tokens to handle both TTS (text-to-speech) and ASR (automatic speech recognition) tasks. The model incorporates two variants for encoding audio—Encodec and SpeechTokenizer—and achieves stable training by fine-tuning precision settings. This approach allows Vikhr Salt to leverage pre-existing LLM knowledge while effectively generating and understanding speech, marking a step forward in multimodal learning.
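The following is only a schematic illustration of the general recipe of extending a pre-trained LLM with discrete audio tokens; the token names, vocabulary size, and training details of Vikhr Salt itself are not documented here, and the values below are placeholders.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

base = "TinyLlama/TinyLlama_v1.1"  # the base model listed for this card
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical set of discrete audio-codec tokens.
audio_tokens = [f"<audio_{i}>" for i in range(1024)]
tokenizer.add_tokens(audio_tokens)

# Grow the embedding matrix so the new tokens can be trained.
model.resize_token_embeddings(len(tokenizer))
```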
## Model Authors
Ksenya Sycheva, Konstantin Korolev, Aleksandr Nikolic
|
RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf | RichardErkhov | 2024-10-27T20:52:30Z | 14 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T18:30:28Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-Coder-1.5B-CodeFIM - GGUF
- Model creator: https://huggingface.co/Etherll/
- Original model: https://huggingface.co/Etherll/Qwen2.5-Coder-1.5B-CodeFIM/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q2_K.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q2_K.gguf) | Q2_K | 0.63GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q3_K.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q3_K.gguf) | Q3_K | 0.77GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q4_0.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q4_0.gguf) | Q4_0 | 0.87GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q4_K.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q4_K.gguf) | Q4_K | 0.92GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q4_1.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q4_1.gguf) | Q4_1 | 0.95GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q5_0.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q5_0.gguf) | Q5_0 | 1.02GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q5_K.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q5_K.gguf) | Q5_K | 1.05GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q5_1.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q5_1.gguf) | Q5_1 | 1.1GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q6_K.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q6_K.gguf) | Q6_K | 1.19GB |
| [Qwen2.5-Coder-1.5B-CodeFIM.Q8_0.gguf](https://huggingface.co/RichardErkhov/Etherll_-_Qwen2.5-Coder-1.5B-CodeFIM-gguf/blob/main/Qwen2.5-Coder-1.5B-CodeFIM.Q8_0.gguf) | Q8_0 | 1.53GB |
Original model description:
---
library_name: transformers
tags: []
---
A small fine-tune of Qwen/Qwen2.5-Coder-1.5B on the <https://huggingface.co/datasets/Etherll/code-fim-v2> dataset for code FIM (Fill-in-the-Middle) generation.
You can use this with [Continue](https://docs.continue.dev/autocomplete/how-to-use-it).
Don't forget to use this format:
```
<|file_name|>{{{filename}}}<|fim_prefix|>{{{prefix}}}<|fim_suffix|>{{{suffix}}}<|fim_middle|>
```
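For example, the triple-brace placeholders can be filled in and assembled into a prompt like this (file name and code snippets are placeholders):

```python
# Builds a FIM prompt in the format shown above.
filename = "utils.py"
prefix = "def add(a, b):\n    "
suffix = "\n    return result\n"

prompt = (
    f"<|file_name|>{filename}"
    f"<|fim_prefix|>{prefix}"
    f"<|fim_suffix|>{suffix}"
    "<|fim_middle|>"
)
# The model is expected to complete the middle, e.g. "result = a + b".
```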
|
oma7777/llama3.18B-Fine-tunedByOmar4BITMERGD | oma7777 | 2024-10-27T20:52:02Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-27T20:49:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf | RichardErkhov | 2024-10-27T20:50:43Z | 303 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T18:32:31Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-Coder-1.5B-Instruct-abliterated - GGUF
- Model creator: https://huggingface.co/huihui-ai/
- Original model: https://huggingface.co/huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q2_K.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q2_K.gguf) | Q2_K | 0.7GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q3_K_S.gguf) | Q3_K_S | 0.8GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q3_K.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q3_K.gguf) | Q3_K | 0.86GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q3_K_M.gguf) | Q3_K_M | 0.86GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q3_K_L.gguf) | Q3_K_L | 0.91GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.IQ4_XS.gguf) | IQ4_XS | 0.96GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q4_0.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q4_0.gguf) | Q4_0 | 0.99GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.IQ4_NL.gguf) | IQ4_NL | 1.0GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q4_K_S.gguf) | Q4_K_S | 1.0GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q4_K.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q4_K.gguf) | Q4_K | 1.04GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q4_K_M.gguf) | Q4_K_M | 1.04GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q4_1.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q4_1.gguf) | Q4_1 | 1.08GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q5_0.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q5_0.gguf) | Q5_0 | 1.17GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q5_K_S.gguf) | Q5_K_S | 1.17GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q5_K.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q5_K.gguf) | Q5_K | 1.2GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q5_K_M.gguf) | Q5_K_M | 1.2GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q5_1.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q5_1.gguf) | Q5_1 | 1.26GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q6_K.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q6_K.gguf) | Q6_K | 1.36GB |
| [Qwen2.5-Coder-1.5B-Instruct-abliterated.Q8_0.gguf](https://huggingface.co/RichardErkhov/huihui-ai_-_Qwen2.5-Coder-1.5B-Instruct-abliterated-gguf/blob/main/Qwen2.5-Coder-1.5B-Instruct-abliterated.Q8_0.gguf) | Q8_0 | 1.76GB |
Original model description:
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
tags:
- chat
- abliterated
- uncensored
---
# huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated
This is an uncensored version of [Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to know more about it).
Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.
## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Initialize conversation context
initial_messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy() # Copy the initial conversation context
# Enter conversation loop
while True:
# Get user input
user_input = input("User: ").strip() # Strip leading and trailing spaces
# If the user types '/exit', end the conversation
if user_input.lower() == "/exit":
print("Exiting chat.")
break
# If the user types '/clean', reset the conversation context
if user_input.lower() == "/clean":
messages = initial_messages.copy() # Reset conversation context
print("Chat history cleared. Starting a new conversation.")
continue
# If input is empty, prompt the user and continue
if not user_input:
print("Input cannot be empty. Please enter something.")
continue
# Add user input to the conversation
messages.append({"role": "user", "content": user_input})
# Build the chat template
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# Tokenize input and prepare it for the model
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# Generate a response from the model
generated_ids = model.generate(
**model_inputs,
max_new_tokens=8192
)
# Extract model output, removing special tokens
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
# Add the model's response to the conversation
messages.append({"role": "assistant", "content": response})
# Print the model's response
print(f"Qwen: {response}")
```
## Evaluations
The following data has been re-evaluated and calculated as the average for each test.
| Benchmark | Qwen2.5-Coder-1.5B-Instruct | Qwen2.5-Coder-1.5B-Instruct-abliterated |
|-------------|-----------------------------|-----------------------------------------|
| IF_Eval | 43.43 | **45.41** |
| MMLU Pro | 21.5 | 20.57 |
| TruthfulQA | 46.07 | 41.9 |
| BBH | 36.67 | 36.09 |
| GPQA | 28.00 | 26.13 |
The script used for evaluation can be found inside this repository under /eval.sh, or click [here](https://huggingface.co/huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated/blob/main/eval.sh)
|
RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf | RichardErkhov | 2024-10-27T20:46:26Z | 13 | 0 | null | [
"gguf",
"arxiv:2409.12122",
"endpoints_compatible",
"region:us"
] | null | 2024-10-27T18:18:34Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-Math-1.5B - GGUF
- Model creator: https://huggingface.co/unsloth/
- Original model: https://huggingface.co/unsloth/Qwen2.5-Math-1.5B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2.5-Math-1.5B.Q2_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q2_K.gguf) | Q2_K | 0.63GB |
| [Qwen2.5-Math-1.5B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [Qwen2.5-Math-1.5B.Q3_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q3_K.gguf) | Q3_K | 0.77GB |
| [Qwen2.5-Math-1.5B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [Qwen2.5-Math-1.5B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [Qwen2.5-Math-1.5B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [Qwen2.5-Math-1.5B.Q4_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q4_0.gguf) | Q4_0 | 0.87GB |
| [Qwen2.5-Math-1.5B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [Qwen2.5-Math-1.5B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [Qwen2.5-Math-1.5B.Q4_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q4_K.gguf) | Q4_K | 0.92GB |
| [Qwen2.5-Math-1.5B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [Qwen2.5-Math-1.5B.Q4_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q4_1.gguf) | Q4_1 | 0.95GB |
| [Qwen2.5-Math-1.5B.Q5_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q5_0.gguf) | Q5_0 | 1.02GB |
| [Qwen2.5-Math-1.5B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [Qwen2.5-Math-1.5B.Q5_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q5_K.gguf) | Q5_K | 1.05GB |
| [Qwen2.5-Math-1.5B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [Qwen2.5-Math-1.5B.Q5_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q5_1.gguf) | Q5_1 | 1.1GB |
| [Qwen2.5-Math-1.5B.Q6_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q6_K.gguf) | Q6_K | 1.19GB |
| [Qwen2.5-Math-1.5B.Q8_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2.5-Math-1.5B-gguf/blob/main/Qwen2.5-Math-1.5B.Q8_0.gguf) | Q8_0 | 1.53GB |
Original model description:
---
base_model: Qwen/Qwen2.5-Math-1.5B
language:
- en
library_name: transformers
license: apache-2.0
tags:
- unsloth
- transformers
---
# Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing).
Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing).
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
# Qwen2.5-Math-1.5B
> [!Warning]
> <div align="center">
> <b>
> 🚨 Qwen2.5-Math mainly supports solving English and Chinese math problems through CoT and TIR. We do not recommend using this series of models for other tasks.
> </b>
> </div>
## Introduction
In August 2024, we released the first series of mathematical LLMs of our Qwen family - [Qwen2-Math](https://qwenlm.github.io/blog/qwen2-math/). A month later, we upgraded it and open-sourced the **Qwen2.5-Math** series, including the base models **Qwen2.5-Math-1.5B/7B/72B**, the instruction-tuned models **Qwen2.5-Math-1.5B/7B/72B-Instruct**, and the mathematical reward model **Qwen2.5-Math-RM-72B**.
Unlike the Qwen2-Math series, which only supports using Chain-of-Thought (CoT) to solve English math problems, the Qwen2.5-Math series is expanded to support using both CoT and Tool-integrated Reasoning (TIR) to solve math problems in both Chinese and English. The Qwen2.5-Math series achieves significant performance improvements over the Qwen2-Math series on Chinese and English mathematics benchmarks with CoT.

While CoT plays a vital role in enhancing the reasoning capabilities of LLMs, it faces challenges in achieving computational accuracy and handling complex mathematical or algorithmic reasoning tasks, such as finding the roots of a quadratic equation or computing the eigenvalues of a matrix. TIR can further improve the model's proficiency in precise computation, symbolic manipulation, and algorithmic manipulation. Qwen2.5-Math-1.5B/7B/72B-Instruct achieve 79.7, 85.3, and 87.8 respectively on the MATH benchmark using TIR.
## Model Details
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen2.5-math/) and [GitHub repo](https://github.com/QwenLM/Qwen2.5-Math).
## Requirements
* `transformers>=4.37.0` for Qwen2.5-Math models. The latest version is recommended.
> [!Warning]
> <div align="center">
> <b>
> 🚨 This is a must because <code>transformers</code> integrated Qwen2 codes since <code>4.37.0</code>.
> </b>
> </div>
For requirements on GPU memory and the respective throughput, see similar results of Qwen2 [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
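A quick sanity check for the version requirement (an illustrative snippet, not part of the original card):

```python
# Verify that the installed transformers release is new enough for Qwen2/Qwen2.5 models.
import transformers

major, minor = (int(x) for x in transformers.__version__.split(".")[:2])
assert (major, minor) >= (4, 37), f"transformers {transformers.__version__} is too old; upgrade to >=4.37.0"
print("transformers", transformers.__version__, "OK")
```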
## Quick Start
> [!Important]
>
> **Qwen2.5-Math-1.5B-Instruct** is an instruction model for chatting;
>
> **Qwen2.5-Math-1.5B** is a base model typically used for completion and few-shot inference, serving as a better starting point for fine-tuning.
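Since the base model expects raw text rather than a chat template, a minimal few-shot completion sketch might look like the following (illustrative only; the repo id points at the upstream Qwen release, and this mirror is used the same way):

```python
# A hedged sketch of few-shot completion with the base model; prompts and settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Math-1.5B"  # upstream release; the unsloth mirror behaves the same way
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Base models are prompted with plain text (few-shot), not with a chat template.
prompt = (
    "Question: What is 12 * 7?\nAnswer: 84\n\n"
    "Question: If x + 3 = 10, what is x?\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```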
## Citation
If you find our work helpful, feel free to give us a citation.
```
@article{yang2024qwen25mathtechnicalreportmathematical,
title={Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement},
author={An Yang and Beichen Zhang and Binyuan Hui and Bofei Gao and Bowen Yu and Chengpeng Li and Dayiheng Liu and Jianhong Tu and Jingren Zhou and Junyang Lin and Keming Lu and Mingfeng Xue and Runji Lin and Tianyu Liu and Xingzhang Ren and Zhenru Zhang},
journal={arXiv preprint arXiv:2409.12122},
year={2024}
}
```
|
maxg73872/distilbert-base-uncased-finetuned-emotion | maxg73872 | 2024-10-27T20:46:24Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T20:34:55Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2142
- Accuracy: 0.9255
- F1: 0.9257
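A minimal inference sketch (not part of the original card; the repo id is assumed to be this card's own Hub path, and the label names depend on the emotion dataset used for fine-tuning):

```python
# A hedged usage sketch with the text-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="maxg73872/distilbert-base-uncased-finetuned-emotion",  # assumed repo id
)
print(classifier("I can't wait to see you this weekend!"))
# Output: a list of {'label': ..., 'score': ...} dicts; label names depend on the training data.
```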
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8226 | 1.0 | 250 | 0.3151 | 0.912 | 0.9113 |
| 0.2438 | 2.0 | 500 | 0.2142 | 0.9255 | 0.9257 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
g-assismoraes/mdeberta-semeval25_narratives09_maxf1_fold1 | g-assismoraes | 2024-10-27T20:42:41Z | 196 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T20:38:35Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_narratives09_maxf1_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_narratives09_maxf1_fold1
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1823
- Precision Samples: 0.3293
- Recall Samples: 0.7877
- F1 Samples: 0.4321
- Precision Macro: 0.6282
- Recall Macro: 0.4990
- F1 Macro: 0.2691
- Precision Micro: 0.2951
- Recall Micro: 0.7770
- F1 Micro: 0.4277
- Precision Weighted: 0.4074
- Recall Weighted: 0.7770
- F1 Weighted: 0.3902
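The per-sample precision/recall figures above indicate a multi-label setup, so a hedged inference sketch would apply a sigmoid and a per-label threshold (the repo id and the 0.5 threshold are assumptions, not taken from the original card):

```python
# A hedged multi-label inference sketch; the threshold and repo id are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "g-assismoraes/mdeberta-semeval25_narratives09_maxf1_fold1"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example news paragraph to label.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits)[0]  # independent probability per narrative label
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```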
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 5.4291 | 1.0 | 19 | 5.3247 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0476 | 0.0476 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 5.1004 | 2.0 | 38 | 4.9863 | 0.5080 | 0.4156 | 0.2894 | 0.8696 | 0.2093 | 0.1349 | 0.3231 | 0.4173 | 0.3642 | 0.6722 | 0.4173 | 0.2245 |
| 4.7777 | 3.0 | 57 | 4.7335 | 0.3225 | 0.5811 | 0.3756 | 0.7986 | 0.3022 | 0.1740 | 0.2955 | 0.5432 | 0.3828 | 0.5753 | 0.5432 | 0.2733 |
| 4.4316 | 4.0 | 76 | 4.5100 | 0.3245 | 0.6957 | 0.4148 | 0.7110 | 0.3811 | 0.2277 | 0.3086 | 0.6439 | 0.4172 | 0.4911 | 0.6439 | 0.3400 |
| 4.214 | 5.0 | 95 | 4.3951 | 0.3194 | 0.7370 | 0.4185 | 0.7042 | 0.4555 | 0.2447 | 0.3047 | 0.7158 | 0.4275 | 0.4850 | 0.7158 | 0.3580 |
| 4.251 | 6.0 | 114 | 4.3089 | 0.2970 | 0.7886 | 0.4064 | 0.6606 | 0.4865 | 0.2541 | 0.2859 | 0.7662 | 0.4164 | 0.4231 | 0.7662 | 0.3698 |
| 3.9884 | 7.0 | 133 | 4.2633 | 0.3254 | 0.7903 | 0.4304 | 0.6159 | 0.4902 | 0.2575 | 0.2950 | 0.7662 | 0.426 | 0.3979 | 0.7662 | 0.3812 |
| 3.9453 | 8.0 | 152 | 4.2122 | 0.3237 | 0.7900 | 0.4299 | 0.6410 | 0.4994 | 0.2724 | 0.2963 | 0.7770 | 0.4290 | 0.4167 | 0.7770 | 0.3925 |
| 4.0275 | 9.0 | 171 | 4.1888 | 0.3203 | 0.7863 | 0.4247 | 0.6272 | 0.4964 | 0.2675 | 0.2921 | 0.7734 | 0.4241 | 0.4054 | 0.7734 | 0.3872 |
| 4.1566 | 10.0 | 190 | 4.1823 | 0.3293 | 0.7877 | 0.4321 | 0.6282 | 0.4990 | 0.2691 | 0.2951 | 0.7770 | 0.4277 | 0.4074 | 0.7770 | 0.3902 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf | RichardErkhov | 2024-10-27T20:39:30Z | 8 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T18:23:26Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qwen2.5_1.5b_4000ocr_600kosmos - GGUF
- Model creator: https://huggingface.co/abelsr1710/
- Original model: https://huggingface.co/abelsr1710/qwen2.5_1.5b_4000ocr_600kosmos/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q2_K.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q2_K.gguf) | Q2_K | 0.63GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q3_K.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q3_K.gguf) | Q3_K | 0.77GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q4_0.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q4_0.gguf) | Q4_0 | 0.87GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q4_K.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q4_K.gguf) | Q4_K | 0.92GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q4_1.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q4_1.gguf) | Q4_1 | 0.95GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q5_0.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q5_0.gguf) | Q5_0 | 1.02GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q5_K.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q5_K.gguf) | Q5_K | 1.05GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q5_1.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q5_1.gguf) | Q5_1 | 1.1GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q6_K.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q6_K.gguf) | Q6_K | 1.19GB |
| [qwen2.5_1.5b_4000ocr_600kosmos.Q8_0.gguf](https://huggingface.co/RichardErkhov/abelsr1710_-_qwen2.5_1.5b_4000ocr_600kosmos-gguf/blob/main/qwen2.5_1.5b_4000ocr_600kosmos.Q8_0.gguf) | Q8_0 | 1.53GB |
Original model description:
---
base_model: unsloth/Qwen2.5-1.5B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Uploaded model
- **Developed by:** abelsr1710
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-1.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
waldie/UnslopSmall-22B-v1-6.5bpw-h6-exl2 | waldie | 2024-10-27T20:38:10Z | 16 | 0 | null | [
"safetensors",
"mistral",
"base_model:TheDrummer/UnslopSmall-22B-v1",
"base_model:quantized:TheDrummer/UnslopSmall-22B-v1",
"exl2",
"region:us"
] | null | 2024-10-27T20:05:45Z | ---
base_model: TheDrummer/UnslopSmall-22B-v1
quantized_by: waldie
--- |
Sergim/classify-real-estate-pics | Sergim | 2024-10-27T20:37:47Z | 7 | 1 | null | [
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] | image-classification | 2024-10-27T20:36:34Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: classify-real-estate-pics
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8550724387168884
---
# classify-real-estate-pics
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
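A minimal usage sketch (not part of the original card) for classifying a listing photo:

```python
# A hedged sketch using the image-classification pipeline; the file path is a placeholder.
from transformers import pipeline

classifier = pipeline("image-classification", model="Sergim/classify-real-estate-pics")
print(classifier("living_room.jpg"))  # local path or URL to a real-estate photo
```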
|
Parisa-Moosavinezhad/my-model-name | Parisa-Moosavinezhad | 2024-10-27T20:36:37Z | 190 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T20:35:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adriszmar/QAIMath-Qwen2.5-7B-TIES | adriszmar | 2024-10-27T20:34:52Z | 7 | 0 | null | [
"safetensors",
"qwen2",
"merge",
"mergekit",
"lazymergekit",
"Qwen/Qwen2.5-Math-7B",
"Qwen/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2024-10-27T20:30:59Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Qwen/Qwen2.5-Math-7B
- Qwen/Qwen2.5-Math-7B-Instruct
---
# QAIMath-Qwen2.5-7B-TIES
QAIMath-Qwen2.5-7B-TIES is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B)
* [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct)
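The merged model can be loaded like any other causal LM; the sketch below is illustrative and not part of the original card (the prompt and generation settings are arbitrary):

```python
# A hedged sketch of loading the merged model and prompting it with plain text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adriszmar/QAIMath-Qwen2.5-7B-TIES"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Question: Solve 3x + 5 = 20 for x.\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```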
## 🧩 Configuration
```yaml
models:
- model: Qwen/Qwen2.5-Math-7B
parameters:
density: 0.5
weight: 0.4
- model: Qwen/Qwen2.5-Math-7B-Instruct
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: Qwen/Qwen2.5-7B
parameters:
normalize: true
dtype: float16
``` |
stablecog-hf-1/FLUX.1-schnell-8bit-text-encoder-2 | stablecog-hf-1 | 2024-10-27T20:33:37Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-10-27T20:29:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
EVA-UNIT-01/EVA-Qwen2.5-32B-v0.0 | EVA-UNIT-01 | 2024-10-27T20:21:06Z | 1,091 | 26 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:Nopm/Opus_WritingStruct",
"dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned",
"dataset:Gryphe/Sonnet3.5-Charcard-Roleplay",
"dataset:Gryphe/ChatGPT-4o-Writing-Prompts",
"dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:nothingiisreal/Reddit-Dirty-And-WritingPrompts",
"dataset:allura-org/Celeste-1.x-data-mixture",
"base_model:Qwen/Qwen2.5-32B",
"base_model:finetune:Qwen/Qwen2.5-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-23T02:36:49Z | ---
library_name: transformers
license: apache-2.0
datasets:
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Nopm/Opus_WritingStruct
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Gryphe/ChatGPT-4o-Writing-Prompts
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- nothingiisreal/Reddit-Dirty-And-WritingPrompts
- allura-org/Celeste-1.x-data-mixture
base_model: Qwen/Qwen2.5-32B
tags:
- generated_from_trainer
model-index:
- name: EVA-Qwen2.5-32B-SFFT-v0.0
results: []
---
# EVA Qwen2.5-32B v0.0
<p>
An RP/storywriting specialist model: a full-parameter finetune of Qwen2.5-32B on a mixture of synthetic and natural data.<br>
It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity and "flavor" of the resulting model.<br>
</p>
<p>Model is available for inference on <a href=https://featherless.ai/models/EVA-UNIT-01/EVA-Qwen2.5-32B-v0.0>Featherless.AI</a></p>
<p>Note: using quantized KV cache with Qwen2.5 <b>is not recommended</b> and can lead to degraded output quality. On the other hand, Qwen's KV cache is already light enough, so using f16 for it shouldn't be problematic.</p>
<p>
<p>Prompt format is ChatML.</p><br>
<h3>Recommended sampler values:</h3>
<ul>
<li>Temperature: 1</li>
<li>Typical-P: 0.9</li>
<li>Min-P: 0.05</li>
<li>Top-A: 0.2</li>
<li>Repetition Penalty: 1.03</li>
</ul>
<h3>Recommended SillyTavern presets (via CalamitousFelicitousness):</h3>
- [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json)
- [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json)
</p>
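<p>For reference, a single ChatML turn is laid out as follows (this is the standard ChatML skeleton, shown here for convenience rather than taken from the original card):</p>

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```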
<p>
<br>
<h3>
Training data:
</h3>
<ul>
<li>Celeste 70B 0.1 data mixture minus Opus Instruct subset. See that model's <a href=https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16>card</a> for details.</li>
<li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li>
<li>A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe</li>
<li>A subset (2k rows) of Sonnet3.5-Charcards-Roleplay by Gryphe</li>
<li>Synthstruct and SynthRP datasets by Epiculous</li>
</ul>
<h3>
Training time and hardware:
</h3>
<ul><li>7 hours on 8xH100 SXM, provided by <a href=https://featherless.ai/>FeatherlessAI</a></li></ul><br>
</p>
<p>Model was trained by Kearm and Auri.</p>
<h4>Special thanks:</h4><ul>
<li><b>to <a href=https://featherless.ai/>FeatherlessAI</a> for generously providing 8xH100 SXM node for training of this model</b></li>
<li>to Gryphe, Lemmy, Kalomaze, Nopm and Epiculous for the data</li>
<li>and to Allura-org for support and feedback on EVA models.</li></ul>
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: Qwen/Qwen2.5-32B
load_in_8bit: false
load_in_4bit: false
strict: false
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
# plugins:
# - axolotl.integrations.spectrum.SpectrumPlugin
# spectrum_top_fraction: 0.5
# # Optional if using a pre-scanned model as your base_model. Useful if using a model mirror
# spectrum_model_name: Qwen/Qwen2.5-32B
datasets:
- path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl
type: sharegpt
- path: datasets/opus-instruct-22k-no_refusals-filtered.jsonl
type: sharegpt
- path: datasets/Celeste_Filtered.jsonl
type: sharegpt
- path: datasets/Gryphe-S3-5-Charcards-names-2k.jsonl
type: sharegpt
- path: datasets/deduped_SynthRP-Gens_processed_09-25-2024-ShareGPT_converted_cleaned.jsonl
type: sharegpt
- path: datasets/deduped_Gryphe-4o-WP-1k.jsonl
type: sharegpt
- path: datasets/deduped_not_samantha_norefusals.jsonl
type: sharegpt
chat_template: chatml
shuffle_merged_datasets: true
val_set_size: 0.001
output_dir: ./EVA-Qwen2.5-32B-SFFT-v0.0
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
# adapter: qlora
# lora_model_dir:
# lora_r: 64
# lora_alpha: 64
# lora_dropout: 0.05
# lora_target_linear: true
# peft_use_dora: true
unfrozen_parameters:
- ^lm_head.weight$
- ^model.embed_tokens.weight$
# input_layernorm layers
- model.layers.0.input_layernorm
- model.layers.1.input_layernorm
- model.layers.2.input_layernorm
- model.layers.3.input_layernorm
- model.layers.4.input_layernorm
- model.layers.5.input_layernorm
- model.layers.6.input_layernorm
- model.layers.7.input_layernorm
- model.layers.8.input_layernorm
- model.layers.9.input_layernorm
- model.layers.10.input_layernorm
- model.layers.11.input_layernorm
- model.layers.12.input_layernorm
- model.layers.13.input_layernorm
- model.layers.14.input_layernorm
- model.layers.15.input_layernorm
- model.layers.16.input_layernorm
- model.layers.17.input_layernorm
- model.layers.18.input_layernorm
- model.layers.19.input_layernorm
- model.layers.20.input_layernorm
- model.layers.21.input_layernorm
- model.layers.22.input_layernorm
- model.layers.23.input_layernorm
- model.layers.24.input_layernorm
- model.layers.25.input_layernorm
- model.layers.26.input_layernorm
- model.layers.27.input_layernorm
- model.layers.28.input_layernorm
- model.layers.29.input_layernorm
- model.layers.30.input_layernorm
- model.layers.31.input_layernorm
# lm_head layers
# mlp.down_proj layers
- model.layers.63.mlp.down_proj
- model.layers.49.mlp.down_proj
- model.layers.48.mlp.down_proj
- model.layers.45.mlp.down_proj
- model.layers.44.mlp.down_proj
- model.layers.47.mlp.down_proj
- model.layers.46.mlp.down_proj
- model.layers.43.mlp.down_proj
- model.layers.8.mlp.down_proj
- model.layers.11.mlp.down_proj
- model.layers.19.mlp.down_proj
- model.layers.35.mlp.down_proj
- model.layers.20.mlp.down_proj
- model.layers.52.mlp.down_proj
- model.layers.39.mlp.down_proj
- model.layers.62.mlp.down_proj
- model.layers.50.mlp.down_proj
- model.layers.29.mlp.down_proj
- model.layers.16.mlp.down_proj
- model.layers.28.mlp.down_proj
- model.layers.53.mlp.down_proj
- model.layers.30.mlp.down_proj
- model.layers.31.mlp.down_proj
- model.layers.32.mlp.down_proj
- model.layers.7.mlp.down_proj
- model.layers.36.mlp.down_proj
- model.layers.12.mlp.down_proj
- model.layers.18.mlp.down_proj
- model.layers.37.mlp.down_proj
- model.layers.38.mlp.down_proj
- model.layers.14.mlp.down_proj
- model.layers.13.mlp.down_proj
# mlp.gate_proj layers
- model.layers.43.mlp.gate_proj
- model.layers.61.mlp.gate_proj
- model.layers.60.mlp.gate_proj
- model.layers.44.mlp.gate_proj
- model.layers.62.mlp.gate_proj
- model.layers.28.mlp.gate_proj
- model.layers.29.mlp.gate_proj
- model.layers.45.mlp.gate_proj
- model.layers.37.mlp.gate_proj
- model.layers.35.mlp.gate_proj
- model.layers.59.mlp.gate_proj
- model.layers.36.mlp.gate_proj
- model.layers.30.mlp.gate_proj
- model.layers.48.mlp.gate_proj
- model.layers.38.mlp.gate_proj
- model.layers.27.mlp.gate_proj
- model.layers.31.mlp.gate_proj
- model.layers.39.mlp.gate_proj
- model.layers.34.mlp.gate_proj
- model.layers.58.mlp.gate_proj
- model.layers.33.mlp.gate_proj
- model.layers.26.mlp.gate_proj
- model.layers.32.mlp.gate_proj
- model.layers.46.mlp.gate_proj
- model.layers.42.mlp.gate_proj
- model.layers.49.mlp.gate_proj
- model.layers.57.mlp.gate_proj
- model.layers.50.mlp.gate_proj
- model.layers.47.mlp.gate_proj
- model.layers.56.mlp.gate_proj
- model.layers.63.mlp.gate_proj
- model.layers.55.mlp.gate_proj
# mlp.up_proj layers
- model.layers.61.mlp.up_proj
- model.layers.60.mlp.up_proj
- model.layers.32.mlp.up_proj
- model.layers.59.mlp.up_proj
- model.layers.58.mlp.up_proj
- model.layers.57.mlp.up_proj
- model.layers.44.mlp.up_proj
- model.layers.28.mlp.up_proj
- model.layers.35.mlp.up_proj
- model.layers.36.mlp.up_proj
- model.layers.31.mlp.up_proj
- model.layers.34.mlp.up_proj
- model.layers.55.mlp.up_proj
- model.layers.29.mlp.up_proj
- model.layers.49.mlp.up_proj
- model.layers.30.mlp.up_proj
- model.layers.53.mlp.up_proj
- model.layers.43.mlp.up_proj
- model.layers.56.mlp.up_proj
- model.layers.33.mlp.up_proj
- model.layers.54.mlp.up_proj
- model.layers.62.mlp.up_proj
- model.layers.27.mlp.up_proj
- model.layers.51.mlp.up_proj
- model.layers.52.mlp.up_proj
- model.layers.37.mlp.up_proj
- model.layers.45.mlp.up_proj
- model.layers.26.mlp.up_proj
- model.layers.42.mlp.up_proj
- model.layers.50.mlp.up_proj
- model.layers.48.mlp.up_proj
- model.layers.39.mlp.up_proj
# model.embed_tokens layers
# model.norm layers
# post_attention_layernorm layers
- model.layers.0.post_attention_layernorm
- model.layers.1.post_attention_layernorm
- model.layers.2.post_attention_layernorm
- model.layers.3.post_attention_layernorm
- model.layers.4.post_attention_layernorm
- model.layers.5.post_attention_layernorm
- model.layers.6.post_attention_layernorm
- model.layers.7.post_attention_layernorm
- model.layers.8.post_attention_layernorm
- model.layers.9.post_attention_layernorm
- model.layers.10.post_attention_layernorm
- model.layers.11.post_attention_layernorm
- model.layers.12.post_attention_layernorm
- model.layers.13.post_attention_layernorm
- model.layers.14.post_attention_layernorm
- model.layers.15.post_attention_layernorm
- model.layers.16.post_attention_layernorm
- model.layers.17.post_attention_layernorm
- model.layers.18.post_attention_layernorm
- model.layers.19.post_attention_layernorm
- model.layers.20.post_attention_layernorm
- model.layers.21.post_attention_layernorm
- model.layers.22.post_attention_layernorm
- model.layers.23.post_attention_layernorm
- model.layers.24.post_attention_layernorm
- model.layers.25.post_attention_layernorm
- model.layers.26.post_attention_layernorm
- model.layers.27.post_attention_layernorm
- model.layers.28.post_attention_layernorm
- model.layers.29.post_attention_layernorm
- model.layers.30.post_attention_layernorm
- model.layers.31.post_attention_layernorm
# self_attn.k_proj layers
- model.layers.63.self_attn.k_proj
- model.layers.55.self_attn.k_proj
- model.layers.60.self_attn.k_proj
- model.layers.7.self_attn.k_proj
- model.layers.12.self_attn.k_proj
- model.layers.13.self_attn.k_proj
- model.layers.57.self_attn.k_proj
- model.layers.29.self_attn.k_proj
- model.layers.14.self_attn.k_proj
- model.layers.51.self_attn.k_proj
- model.layers.53.self_attn.k_proj
- model.layers.54.self_attn.k_proj
- model.layers.22.self_attn.k_proj
- model.layers.61.self_attn.k_proj
- model.layers.18.self_attn.k_proj
- model.layers.30.self_attn.k_proj
- model.layers.9.self_attn.k_proj
- model.layers.24.self_attn.k_proj
- model.layers.23.self_attn.k_proj
- model.layers.25.self_attn.k_proj
- model.layers.10.self_attn.k_proj
- model.layers.58.self_attn.k_proj
- model.layers.56.self_attn.k_proj
- model.layers.15.self_attn.k_proj
- model.layers.32.self_attn.k_proj
- model.layers.28.self_attn.k_proj
- model.layers.8.self_attn.k_proj
- model.layers.59.self_attn.k_proj
- model.layers.11.self_attn.k_proj
- model.layers.48.self_attn.k_proj
- model.layers.16.self_attn.k_proj
- model.layers.50.self_attn.k_proj
# self_attn.o_proj layers
- model.layers.15.self_attn.o_proj
- model.layers.23.self_attn.o_proj
- model.layers.31.self_attn.o_proj
- model.layers.30.self_attn.o_proj
- model.layers.18.self_attn.o_proj
- model.layers.24.self_attn.o_proj
- model.layers.17.self_attn.o_proj
- model.layers.28.self_attn.o_proj
- model.layers.34.self_attn.o_proj
- model.layers.33.self_attn.o_proj
- model.layers.25.self_attn.o_proj
- model.layers.12.self_attn.o_proj
- model.layers.14.self_attn.o_proj
- model.layers.29.self_attn.o_proj
- model.layers.16.self_attn.o_proj
- model.layers.26.self_attn.o_proj
- model.layers.22.self_attn.o_proj
- model.layers.27.self_attn.o_proj
- model.layers.35.self_attn.o_proj
- model.layers.20.self_attn.o_proj
- model.layers.13.self_attn.o_proj
- model.layers.36.self_attn.o_proj
- model.layers.19.self_attn.o_proj
- model.layers.37.self_attn.o_proj
- model.layers.21.self_attn.o_proj
- model.layers.11.self_attn.o_proj
- model.layers.54.self_attn.o_proj
- model.layers.5.self_attn.o_proj
- model.layers.38.self_attn.o_proj
- model.layers.6.self_attn.o_proj
- model.layers.8.self_attn.o_proj
- model.layers.9.self_attn.o_proj
# self_attn.q_proj layers
- model.layers.1.self_attn.q_proj
- model.layers.2.self_attn.q_proj
- model.layers.3.self_attn.q_proj
- model.layers.45.self_attn.q_proj
- model.layers.54.self_attn.q_proj
- model.layers.35.self_attn.q_proj
- model.layers.48.self_attn.q_proj
- model.layers.61.self_attn.q_proj
- model.layers.52.self_attn.q_proj
- model.layers.50.self_attn.q_proj
- model.layers.60.self_attn.q_proj
- model.layers.56.self_attn.q_proj
- model.layers.58.self_attn.q_proj
- model.layers.42.self_attn.q_proj
- model.layers.59.self_attn.q_proj
- model.layers.44.self_attn.q_proj
- model.layers.55.self_attn.q_proj
- model.layers.57.self_attn.q_proj
- model.layers.41.self_attn.q_proj
- model.layers.36.self_attn.q_proj
- model.layers.39.self_attn.q_proj
- model.layers.4.self_attn.q_proj
- model.layers.43.self_attn.q_proj
- model.layers.34.self_attn.q_proj
- model.layers.46.self_attn.q_proj
- model.layers.49.self_attn.q_proj
- model.layers.40.self_attn.q_proj
- model.layers.25.self_attn.q_proj
- model.layers.51.self_attn.q_proj
- model.layers.17.self_attn.q_proj
- model.layers.37.self_attn.q_proj
- model.layers.53.self_attn.q_proj
# self_attn.v_proj layers
- model.layers.55.self_attn.v_proj
- model.layers.31.self_attn.v_proj
- model.layers.47.self_attn.v_proj
- model.layers.45.self_attn.v_proj
- model.layers.49.self_attn.v_proj
- model.layers.48.self_attn.v_proj
- model.layers.15.self_attn.v_proj
- model.layers.30.self_attn.v_proj
- model.layers.7.self_attn.v_proj
- model.layers.44.self_attn.v_proj
- model.layers.29.self_attn.v_proj
- model.layers.51.self_attn.v_proj
- model.layers.50.self_attn.v_proj
- model.layers.14.self_attn.v_proj
- model.layers.54.self_attn.v_proj
- model.layers.32.self_attn.v_proj
- model.layers.43.self_attn.v_proj
- model.layers.10.self_attn.v_proj
- model.layers.46.self_attn.v_proj
- model.layers.38.self_attn.v_proj
- model.layers.57.self_attn.v_proj
- model.layers.22.self_attn.v_proj
- model.layers.39.self_attn.v_proj
- model.layers.6.self_attn.v_proj
- model.layers.23.self_attn.v_proj
- model.layers.58.self_attn.v_proj
- model.layers.53.self_attn.v_proj
- model.layers.40.self_attn.v_proj
- model.layers.24.self_attn.v_proj
- model.layers.9.self_attn.v_proj
- model.layers.25.self_attn.v_proj
- model.layers.5.self_attn.v_proj
wandb_project: EVA-Qwen2.5-32B-SFFT-v0.0
wandb_entity:
wandb_watch:
wandb_name: Unit-00
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
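# note: effective per-device batch = micro_batch_size (1) x gradient_accumulation_steps (8) = 8; the global batch additionally scales with GPU count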
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00003
max_grad_norm: 3
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: "unsloth"
# gradient_checkpointing_kwargs:
# use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 2
save_safetensors: true
hub_model_id:
hub_strategy:
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
# fsdp:
# - full_shard
# - auto_wrap
# fsdp_config:
# fsdp_limit_all_gathers: true
# fsdp_sync_module_states: true
# fsdp_offload_params: false # Changed from true
# fsdp_use_orig_params: true # Changed from false
# fsdp_cpu_ram_efficient_loading: true
# fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
# fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer
# fsdp_activation_checkpointing: true
# fsdp_state_dict_type: SHARDED_STATE_DICT # Changed from FULL_STATE_DICT
# fsdp_sharding_strategy: FULL_SHARD
# fsdp_forward_prefetch: true # Added
# fsdp_backward_prefetch: "BACKWARD_POST" # Added
# fsdp_backward_prefetch_limit: 1 # Added
# fsdp_mixed_precision: BF16 # Added
```
</details><br>
|
nlpguy/amdchess-v4 | nlpguy | 2024-10-27T20:18:08Z | 131 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:amd/AMD-Llama-135m",
"base_model:finetune:amd/AMD-Llama-135m",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T19:41:09Z | ---
library_name: transformers
license: apache-2.0
base_model: amd/AMD-Llama-135m
tags:
- generated_from_trainer
model-index:
- name: amdchess-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amdchess-v4
This model is a fine-tuned version of [amd/AMD-Llama-135m](https://huggingface.co/amd/AMD-Llama-135m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: GrokAdamW with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 0.25
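For reference, here is a minimal sketch of the same settings expressed as Transformers `TrainingArguments`. It is illustrative only: the actual training script and dataset are not documented above, and `optim="grokadamw"` assumes the separate `grokadamw` package is installed.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; output_dir is an arbitrary choice.
args = TrainingArguments(
    output_dir="amdchess-v4",
    learning_rate=3e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="grokadamw",          # assumption: the grokadamw package is installed
    lr_scheduler_type="cosine",
    num_train_epochs=0.25,
)
```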
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 9.9629 | 0.0030 | 5 | 5.6096 |
| 3.7446 | 0.0059 | 10 | 3.3680 |
| 2.524 | 0.0089 | 15 | 2.3223 |
| 1.9286 | 0.0118 | 20 | 1.7446 |
| 1.5475 | 0.0148 | 25 | 2.0681 |
| 1.2838 | 0.0177 | 30 | 1.4096 |
| 1.3152 | 0.0207 | 35 | 1.2730 |
| 1.2488 | 0.0236 | 40 | 1.2203 |
| 1.088 | 0.0266 | 45 | 1.1461 |
| 1.0479 | 0.0295 | 50 | 1.1139 |
| 1.0758 | 0.0325 | 55 | 1.0844 |
| 1.1275 | 0.0354 | 60 | 1.0443 |
| 1.1378 | 0.0384 | 65 | 1.0260 |
| 1.0147 | 0.0413 | 70 | 0.9939 |
| 0.993 | 0.0443 | 75 | 1.0074 |
| 1.0132 | 0.0472 | 80 | 0.9866 |
| 0.9155 | 0.0502 | 85 | 0.9697 |
| 0.9656 | 0.0531 | 90 | 0.9757 |
| 1.0402 | 0.0561 | 95 | 0.9633 |
| 0.9759 | 0.0590 | 100 | 0.9528 |
| 0.9505 | 0.0620 | 105 | 0.9501 |
| 1.0114 | 0.0649 | 110 | 0.9405 |
| 1.0182 | 0.0679 | 115 | 0.9212 |
| 0.9396 | 0.0708 | 120 | 0.9284 |
| 0.902 | 0.0738 | 125 | 0.9262 |
| 0.9533 | 0.0767 | 130 | 0.9121 |
| 0.8755 | 0.0797 | 135 | 0.9160 |
| 0.9349 | 0.0826 | 140 | 0.9083 |
| 0.9585 | 0.0856 | 145 | 0.8993 |
| 0.8349 | 0.0885 | 150 | 0.9000 |
| 0.9541 | 0.0915 | 155 | 0.8887 |
| 0.9108 | 0.0945 | 160 | 0.8837 |
| 0.9196 | 0.0974 | 165 | 0.8806 |
| 0.9094 | 0.1004 | 170 | 0.8776 |
| 0.8514 | 0.1033 | 175 | 0.8759 |
| 0.7515 | 0.1063 | 180 | 0.8684 |
| 0.8031 | 0.1092 | 185 | 0.8676 |
| 0.8639 | 0.1122 | 190 | 0.8661 |
| 0.8002 | 0.1151 | 195 | 0.8556 |
| 0.7812 | 0.1181 | 200 | 0.8574 |
| 0.9163 | 0.1210 | 205 | 0.8582 |
| 0.8824 | 0.1240 | 210 | 0.8515 |
| 0.8759 | 0.1269 | 215 | 0.8502 |
| 0.8384 | 0.1299 | 220 | 0.8467 |
| 0.8436 | 0.1328 | 225 | 0.8427 |
| 0.8329 | 0.1358 | 230 | 0.8398 |
| 0.87 | 0.1387 | 235 | 0.8393 |
| 0.8405 | 0.1417 | 240 | 0.8356 |
| 0.8634 | 0.1446 | 245 | 0.8339 |
| 0.8298 | 0.1476 | 250 | 0.8315 |
| 0.7582 | 0.1505 | 255 | 0.8278 |
| 0.7912 | 0.1535 | 260 | 0.8257 |
| 0.8878 | 0.1564 | 265 | 0.8247 |
| 0.8443 | 0.1594 | 270 | 0.8229 |
| 0.8965 | 0.1623 | 275 | 0.8206 |
| 0.8298 | 0.1653 | 280 | 0.8178 |
| 0.7496 | 0.1682 | 285 | 0.8177 |
| 0.7794 | 0.1712 | 290 | 0.8148 |
| 0.8354 | 0.1741 | 295 | 0.8137 |
| 0.8861 | 0.1771 | 300 | 0.8124 |
| 0.7683 | 0.1800 | 305 | 0.8118 |
| 0.8414 | 0.1830 | 310 | 0.8106 |
| 0.8624 | 0.1860 | 315 | 0.8083 |
| 0.7753 | 0.1889 | 320 | 0.8076 |
| 0.778 | 0.1919 | 325 | 0.8060 |
| 0.8171 | 0.1948 | 330 | 0.8051 |
| 0.7006 | 0.1978 | 335 | 0.8049 |
| 0.8365 | 0.2007 | 340 | 0.8032 |
| 0.8057 | 0.2037 | 345 | 0.8021 |
| 0.7914 | 0.2066 | 350 | 0.8015 |
| 0.9043 | 0.2096 | 355 | 0.8008 |
| 0.8317 | 0.2125 | 360 | 0.8001 |
| 0.7631 | 0.2155 | 365 | 0.7997 |
| 0.8301 | 0.2184 | 370 | 0.7993 |
| 0.8701 | 0.2214 | 375 | 0.7988 |
| 0.7469 | 0.2243 | 380 | 0.7985 |
| 0.7643 | 0.2273 | 385 | 0.7981 |
| 0.8388 | 0.2302 | 390 | 0.7978 |
| 0.8808 | 0.2332 | 395 | 0.7975 |
| 0.7441 | 0.2361 | 400 | 0.7974 |
| 0.7641 | 0.2391 | 405 | 0.7972 |
| 0.727 | 0.2420 | 410 | 0.7971 |
| 0.771 | 0.2450 | 415 | 0.7971 |
| 0.7442 | 0.2479 | 420 | 0.7971 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
|
drahmel/Daredevil-8B-abliterated-story | drahmel | 2024-10-27T20:13:52Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T20:05:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
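While the card is incomplete, the following is a minimal, hedged sketch for trying the model as a plain text-generation pipeline; the prompt, generation settings, and `device_map="auto"` (which needs `accelerate`) are illustrative assumptions, not taken from this card.

```python
from transformers import pipeline

# Loads this repository as a standard causal-LM text-generation pipeline.
generator = pipeline(
    "text-generation",
    model="drahmel/Daredevil-8B-abliterated-story",
    device_map="auto",  # assumption: accelerate installed; remove to load on CPU
)
print(generator("Once upon a time,", max_new_tokens=64)[0]["generated_text"])
```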
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF | mradermacher | 2024-10-27T20:04:07Z | 29 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B",
"base_model:quantized:DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-10-27T16:21:26Z | ---
base_model: DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
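As a hedged sketch, one common way to run these files is via `llama-cpp-python` (llama.cpp itself or any GGUF-aware front end works just as well); the quant file name is taken from the table below, while the context size and prompt are arbitrary choices.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumes the i1-Q4_K_M file from the table below has been downloaded locally.
llm = Llama(
    model_path="MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Write one sentence about the sea.", max_tokens=64)
print(out["choices"][0]["text"])
```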
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-IQ1_S.gguf) | i1-IQ1_S | 5.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-IQ2_M.gguf) | i1-IQ2_M | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-Q4_0.gguf) | i1-Q4_0 | 13.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.3 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B-i1-GGUF/resolve/main/MN-GRAND-Gutenberg-Lyra4-Lyra-23.5B.i1-Q6_K.gguf) | i1-Q6_K | 19.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
LuisMG2/iabd_model | LuisMG2 | 2024-10-27T20:00:14Z | 6 | 0 | null | [
"pytorch",
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2024-10-27T09:44:38Z | ---
license: cc-by-nc-nd-4.0
tags:
- vision
- image-classification
datasets:
- omarques/autotrain-data-dogs-and-cats
--- |
nlpguy/amdchess-v3 | nlpguy | 2024-10-27T19:59:32Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:reflex-ai/AMD-Llama-350M-Upgraded",
"base_model:finetune:reflex-ai/AMD-Llama-350M-Upgraded",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T18:02:37Z | ---
library_name: transformers
license: apache-2.0
base_model: reflex-ai/AMD-Llama-350M-Upgraded
tags:
- generated_from_trainer
model-index:
- name: amdchess-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amdchess-v3
This model is a fine-tuned version of [reflex-ai/AMD-Llama-350M-Upgraded](https://huggingface.co/reflex-ai/AMD-Llama-350M-Upgraded) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 0.25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.6481 | 0.0030 | 5 | 7.3246 |
| 7.1045 | 0.0059 | 10 | 6.8823 |
| 6.5856 | 0.0089 | 15 | 6.5701 |
| 6.1701 | 0.0118 | 20 | 6.0976 |
| 5.7428 | 0.0148 | 25 | 5.7033 |
| 5.6064 | 0.0177 | 30 | 5.3915 |
| 5.096 | 0.0207 | 35 | 4.9774 |
| 4.6607 | 0.0236 | 40 | 4.6606 |
| 4.4224 | 0.0266 | 45 | 4.3904 |
| 4.2617 | 0.0295 | 50 | 4.1209 |
| 4.0037 | 0.0325 | 55 | 3.9065 |
| 3.8326 | 0.0354 | 60 | 3.7226 |
| 3.5859 | 0.0384 | 65 | 3.5654 |
| 3.5209 | 0.0413 | 70 | 3.3901 |
| 3.2487 | 0.0443 | 75 | 3.2572 |
| 3.111 | 0.0472 | 80 | 3.0276 |
| 2.8844 | 0.0502 | 85 | 2.8643 |
| 2.7695 | 0.0531 | 90 | 2.7651 |
| 2.7369 | 0.0561 | 95 | 2.6283 |
| 2.4932 | 0.0590 | 100 | 2.5018 |
| 2.3424 | 0.0620 | 105 | 2.3886 |
| 2.3822 | 0.0649 | 110 | 2.3002 |
| 2.1709 | 0.0679 | 115 | 2.1980 |
| 2.0245 | 0.0708 | 120 | 2.1401 |
| 2.0681 | 0.0738 | 125 | 2.0873 |
| 2.0483 | 0.0767 | 130 | 2.0304 |
| 2.1128 | 0.0797 | 135 | 1.9849 |
| 1.9851 | 0.0826 | 140 | 1.9261 |
| 1.8878 | 0.0856 | 145 | 1.8993 |
| 1.9144 | 0.0885 | 150 | 1.8522 |
| 1.8315 | 0.0915 | 155 | 1.8441 |
| 1.8331 | 0.0945 | 160 | 1.8086 |
| 1.6939 | 0.0974 | 165 | 1.7622 |
| 1.7247 | 0.1004 | 170 | 1.7290 |
| 1.7578 | 0.1033 | 175 | 1.7001 |
| 1.7665 | 0.1063 | 180 | 1.6987 |
| 1.6891 | 0.1092 | 185 | 1.6677 |
| 1.5931 | 0.1122 | 190 | 1.6512 |
| 1.6587 | 0.1151 | 195 | 1.6247 |
| 1.6703 | 0.1181 | 200 | 1.6061 |
| 1.5718 | 0.1210 | 205 | 1.5952 |
| 1.6414 | 0.1240 | 210 | 1.5690 |
| 1.5659 | 0.1269 | 215 | 1.5563 |
| 1.7055 | 0.1299 | 220 | 1.5354 |
| 1.5557 | 0.1328 | 225 | 1.5216 |
| 1.526 | 0.1358 | 230 | 1.5040 |
| 1.5513 | 0.1387 | 235 | 1.4986 |
| 1.4993 | 0.1417 | 240 | 1.4960 |
| 1.5187 | 0.1446 | 245 | 1.4842 |
| 1.4945 | 0.1476 | 250 | 1.4721 |
| 1.4969 | 0.1505 | 255 | 1.4705 |
| 1.4805 | 0.1535 | 260 | 1.4485 |
| 1.3945 | 0.1564 | 265 | 1.4433 |
| 1.4712 | 0.1594 | 270 | 1.4359 |
| 1.4197 | 0.1623 | 275 | 1.4292 |
| 1.4211 | 0.1653 | 280 | 1.4243 |
| 1.2673 | 0.1682 | 285 | 1.4238 |
| 1.4609 | 0.1712 | 290 | 1.4490 |
| 1.4633 | 0.1741 | 295 | 1.4193 |
| 1.4171 | 0.1771 | 300 | 1.4049 |
| 1.4011 | 0.1800 | 305 | 1.4024 |
| 1.2451 | 0.1830 | 310 | 1.3998 |
| 1.5563 | 0.1860 | 315 | 1.3952 |
| 1.3135 | 0.1889 | 320 | 1.3910 |
| 1.4269 | 0.1919 | 325 | 1.3905 |
| 1.3852 | 0.1948 | 330 | 1.3868 |
| 1.4691 | 0.1978 | 335 | 1.3806 |
| 1.4233 | 0.2007 | 340 | 1.3768 |
| 1.3279 | 0.2037 | 345 | 1.3780 |
| 1.3566 | 0.2066 | 350 | 1.3721 |
| 1.4463 | 0.2096 | 355 | 1.3688 |
| 1.3598 | 0.2125 | 360 | 1.3696 |
| 1.4411 | 0.2155 | 365 | 1.3668 |
| 1.3842 | 0.2184 | 370 | 1.3663 |
| 1.2909 | 0.2214 | 375 | 1.3654 |
| 1.3835 | 0.2243 | 380 | 1.3647 |
| 1.4124 | 0.2273 | 385 | 1.3619 |
| 1.3389 | 0.2302 | 390 | 1.3625 |
| 1.4634 | 0.2332 | 395 | 1.3609 |
| 1.2831 | 0.2361 | 400 | 1.3602 |
| 1.2724 | 0.2391 | 405 | 1.3599 |
| 1.3864 | 0.2420 | 410 | 1.3596 |
| 1.3273 | 0.2450 | 415 | 1.3595 |
| 1.3081 | 0.2479 | 420 | 1.3595 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
|
g-assismoraes/mdeberta-semeval25_narratives09_fold5 | g-assismoraes | 2024-10-27T19:58:32Z | 161 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T19:54:26Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_narratives09_fold5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_narratives09_fold5
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0227
- Precision Samples: 0.3630
- Recall Samples: 0.7663
- F1 Samples: 0.4583
- Precision Macro: 0.6929
- Recall Macro: 0.5586
- F1 Macro: 0.3787
- Precision Micro: 0.3170
- Recall Micro: 0.7293
- F1 Micro: 0.4419
- Precision Weighted: 0.4618
- Recall Weighted: 0.7293
- F1 Weighted: 0.4006
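The sample-, micro-, macro- and weighted-averaged numbers above are the standard multilabel averaging modes. A small illustrative sketch of how such scores are typically computed with scikit-learn (toy matrices, not the evaluation script of this run):

```python
import numpy as np
from sklearn.metrics import f1_score, precision_score, recall_score

# y_true / y_pred are binary indicator matrices: one row per document,
# one column per narrative label (toy values for illustration only).
y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 1]])

for avg in ("samples", "micro", "macro", "weighted"):
    print(avg,
          precision_score(y_true, y_pred, average=avg, zero_division=0),
          recall_score(y_true, y_pred, average=avg, zero_division=0),
          f1_score(y_true, y_pred, average=avg, zero_division=0))
```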
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 5.5606 | 1.0 | 19 | 5.1743 | 1.0 | 0.0 | 0.0 | 1.0 | 0.1429 | 0.1429 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 4.8513 | 2.0 | 38 | 4.9270 | 0.2759 | 0.2532 | 0.2276 | 0.9372 | 0.2238 | 0.1869 | 0.2865 | 0.2068 | 0.2402 | 0.8398 | 0.2068 | 0.1101 |
| 5.1086 | 3.0 | 57 | 4.6316 | 0.3810 | 0.4853 | 0.3601 | 0.8763 | 0.3242 | 0.2396 | 0.3420 | 0.4474 | 0.3876 | 0.6961 | 0.4474 | 0.2403 |
| 4.5134 | 4.0 | 76 | 4.4138 | 0.3413 | 0.6266 | 0.4146 | 0.7828 | 0.4166 | 0.2917 | 0.3196 | 0.5827 | 0.4128 | 0.5521 | 0.5827 | 0.3108 |
| 4.3876 | 5.0 | 95 | 4.2907 | 0.3599 | 0.6644 | 0.4357 | 0.7174 | 0.4444 | 0.3230 | 0.3259 | 0.6015 | 0.4227 | 0.4753 | 0.6015 | 0.3464 |
| 4.084 | 6.0 | 114 | 4.1465 | 0.3372 | 0.7364 | 0.4312 | 0.7116 | 0.5145 | 0.3409 | 0.2987 | 0.7030 | 0.4193 | 0.4704 | 0.7030 | 0.3684 |
| 3.9969 | 7.0 | 133 | 4.0975 | 0.3583 | 0.7479 | 0.4546 | 0.7007 | 0.5368 | 0.3753 | 0.3198 | 0.7105 | 0.4411 | 0.4677 | 0.7105 | 0.3978 |
| 3.9677 | 8.0 | 152 | 4.0623 | 0.3605 | 0.7543 | 0.4564 | 0.6912 | 0.5472 | 0.3758 | 0.3220 | 0.7105 | 0.4431 | 0.4631 | 0.7105 | 0.3995 |
| 4.0107 | 9.0 | 171 | 4.0401 | 0.3565 | 0.7571 | 0.4538 | 0.6965 | 0.5523 | 0.3805 | 0.3188 | 0.7143 | 0.4408 | 0.4649 | 0.7143 | 0.4006 |
| 3.9591 | 10.0 | 190 | 4.0227 | 0.3630 | 0.7663 | 0.4583 | 0.6929 | 0.5586 | 0.3787 | 0.3170 | 0.7293 | 0.4419 | 0.4618 | 0.7293 | 0.4006 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
mradermacher/MS-Schisandra-22B-vA-i1-GGUF | mradermacher | 2024-10-27T19:57:08Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-27T16:24:08Z | ---
base_model: Nohobby/MS-Schisandra-22B-vA
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Nohobby/MS-Schisandra-22B-vA
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-IQ1_S.gguf) | i1-IQ1_S | 4.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-IQ1_M.gguf) | i1-IQ1_M | 5.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-IQ2_S.gguf) | i1-IQ2_S | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-IQ2_M.gguf) | i1-IQ2_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-Q2_K.gguf) | i1-Q2_K | 8.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 8.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-IQ3_XS.gguf) | i1-IQ3_XS | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-IQ3_S.gguf) | i1-IQ3_S | 9.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-IQ3_M.gguf) | i1-IQ3_M | 10.2 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-Q3_K_L.gguf) | i1-Q3_K_L | 11.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.0 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-Q4_0.gguf) | i1-Q4_0 | 12.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-Q4_K_S.gguf) | i1-Q4_K_S | 12.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-Q4_K_M.gguf) | i1-Q4_K_M | 13.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-Q5_K_S.gguf) | i1-Q5_K_S | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-Q5_K_M.gguf) | i1-Q5_K_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/MS-Schisandra-22B-vA-i1-GGUF/resolve/main/MS-Schisandra-22B-vA.i1-Q6_K.gguf) | i1-Q6_K | 18.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
g-assismoraes/mdeberta-semeval25_narratives09_fold4 | g-assismoraes | 2024-10-27T19:54:22Z | 196 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T19:50:39Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_narratives09_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_narratives09_fold4
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7685
- Precision Samples: 0.3724
- Recall Samples: 0.7791
- F1 Samples: 0.4660
- Precision Macro: 0.6802
- Recall Macro: 0.4995
- F1 Macro: 0.2745
- Precision Micro: 0.3076
- Recall Micro: 0.7647
- F1 Micro: 0.4387
- Precision Weighted: 0.4736
- Recall Weighted: 0.7647
- F1 Weighted: 0.3979
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 5.7927 | 1.0 | 19 | 4.9876 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0476 | 0.0476 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 5.0899 | 2.0 | 38 | 4.7739 | 0.3023 | 0.3386 | 0.2905 | 0.8797 | 0.1700 | 0.1306 | 0.316 | 0.3098 | 0.3129 | 0.7069 | 0.3098 | 0.2068 |
| 5.184 | 3.0 | 57 | 4.4531 | 0.3310 | 0.4776 | 0.3705 | 0.8491 | 0.2311 | 0.1455 | 0.3304 | 0.4471 | 0.38 | 0.6518 | 0.4471 | 0.2363 |
| 4.8172 | 4.0 | 76 | 4.2540 | 0.3585 | 0.6171 | 0.4157 | 0.7777 | 0.3401 | 0.2009 | 0.2955 | 0.5922 | 0.3943 | 0.5605 | 0.5922 | 0.3170 |
| 4.6123 | 5.0 | 95 | 4.0275 | 0.3880 | 0.6493 | 0.4406 | 0.7328 | 0.3521 | 0.2096 | 0.3224 | 0.6157 | 0.4232 | 0.5172 | 0.6157 | 0.3372 |
| 4.4261 | 6.0 | 114 | 3.9283 | 0.3893 | 0.7197 | 0.4591 | 0.7160 | 0.4256 | 0.2490 | 0.3076 | 0.7020 | 0.4277 | 0.4984 | 0.7020 | 0.3797 |
| 4.0921 | 7.0 | 133 | 3.8476 | 0.3760 | 0.7710 | 0.4677 | 0.6844 | 0.4849 | 0.2771 | 0.3153 | 0.7529 | 0.4444 | 0.4774 | 0.7529 | 0.4014 |
| 4.1832 | 8.0 | 152 | 3.7974 | 0.3744 | 0.7932 | 0.4738 | 0.6823 | 0.4933 | 0.2773 | 0.3166 | 0.7647 | 0.4478 | 0.4787 | 0.7647 | 0.4061 |
| 4.3611 | 9.0 | 171 | 3.7819 | 0.3743 | 0.7825 | 0.4678 | 0.6819 | 0.4981 | 0.2763 | 0.3095 | 0.7647 | 0.4407 | 0.4758 | 0.7647 | 0.4006 |
| 3.945 | 10.0 | 190 | 3.7685 | 0.3724 | 0.7791 | 0.4660 | 0.6802 | 0.4995 | 0.2745 | 0.3076 | 0.7647 | 0.4387 | 0.4736 | 0.7647 | 0.3979 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
g-assismoraes/mdeberta-semeval25_narratives09_fold3 | g-assismoraes | 2024-10-27T19:50:34Z | 161 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T19:45:54Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_narratives09_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_narratives09_fold3
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2001
- Precision Samples: 0.3657
- Recall Samples: 0.7451
- F1 Samples: 0.4607
- Precision Macro: 0.6982
- Recall Macro: 0.4621
- F1 Macro: 0.2860
- Precision Micro: 0.3270
- Recall Micro: 0.6974
- F1 Micro: 0.4452
- Precision Weighted: 0.4844
- Recall Weighted: 0.6974
- F1 Weighted: 0.3863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 5.6486 | 1.0 | 19 | 5.3335 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0476 | 0.0476 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 5.1543 | 2.0 | 38 | 5.1482 | 0.2989 | 0.3545 | 0.2947 | 0.8737 | 0.1754 | 0.1269 | 0.2960 | 0.3026 | 0.2993 | 0.7101 | 0.3026 | 0.1830 |
| 4.8675 | 3.0 | 57 | 4.9437 | 0.2764 | 0.4597 | 0.3267 | 0.8661 | 0.2223 | 0.1320 | 0.2835 | 0.3985 | 0.3313 | 0.6942 | 0.3985 | 0.1930 |
| 4.5144 | 4.0 | 76 | 4.6737 | 0.3513 | 0.6045 | 0.4080 | 0.7918 | 0.3051 | 0.2033 | 0.3198 | 0.5240 | 0.3972 | 0.5901 | 0.5240 | 0.2991 |
| 4.6334 | 5.0 | 95 | 4.4861 | 0.3436 | 0.6636 | 0.4219 | 0.7584 | 0.3706 | 0.2294 | 0.3035 | 0.6015 | 0.4035 | 0.5513 | 0.6015 | 0.3222 |
| 4.4156 | 6.0 | 114 | 4.3417 | 0.3529 | 0.7394 | 0.4447 | 0.7163 | 0.4305 | 0.2534 | 0.3129 | 0.6790 | 0.4284 | 0.4923 | 0.6790 | 0.3581 |
| 3.9776 | 7.0 | 133 | 4.2836 | 0.3659 | 0.7371 | 0.4542 | 0.7193 | 0.4290 | 0.2548 | 0.3183 | 0.6753 | 0.4326 | 0.4993 | 0.6753 | 0.3622 |
| 4.0482 | 8.0 | 152 | 4.2803 | 0.3560 | 0.7061 | 0.4386 | 0.7124 | 0.4265 | 0.2660 | 0.3201 | 0.6568 | 0.4305 | 0.4918 | 0.6568 | 0.3668 |
| 4.0709 | 9.0 | 171 | 4.1972 | 0.3717 | 0.7443 | 0.4602 | 0.7075 | 0.4553 | 0.2830 | 0.3209 | 0.6974 | 0.4395 | 0.4898 | 0.6974 | 0.3834 |
| 4.3494 | 10.0 | 190 | 4.2001 | 0.3657 | 0.7451 | 0.4607 | 0.6982 | 0.4621 | 0.2860 | 0.3270 | 0.6974 | 0.4452 | 0.4844 | 0.6974 | 0.3863 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
g-assismoraes/mdeberta-semeval25_narratives09_fold2 | g-assismoraes | 2024-10-27T19:45:49Z | 161 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T19:41:42Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_narratives09_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_narratives09_fold2
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2915
- Precision Samples: 0.3850
- Recall Samples: 0.7226
- F1 Samples: 0.4627
- Precision Macro: 0.7130
- Recall Macro: 0.4503
- F1 Macro: 0.2846
- Precision Micro: 0.3282
- Recall Micro: 0.6957
- F1 Micro: 0.4460
- Precision Weighted: 0.4983
- Recall Weighted: 0.6957
- F1 Weighted: 0.3925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 5.4789 | 1.0 | 19 | 5.4030 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0476 | 0.0476 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 5.2627 | 2.0 | 38 | 5.1901 | 0.2839 | 0.3351 | 0.2805 | 0.9014 | 0.1655 | 0.1112 | 0.2952 | 0.2899 | 0.2925 | 0.7607 | 0.2899 | 0.1520 |
| 4.6993 | 3.0 | 57 | 5.0001 | 0.3075 | 0.4274 | 0.3272 | 0.8700 | 0.2042 | 0.1344 | 0.3164 | 0.3841 | 0.3470 | 0.6843 | 0.3841 | 0.2124 |
| 4.5547 | 4.0 | 76 | 4.7741 | 0.3603 | 0.5142 | 0.3949 | 0.8024 | 0.2616 | 0.1705 | 0.3290 | 0.4601 | 0.3837 | 0.5941 | 0.4601 | 0.2529 |
| 4.2228 | 5.0 | 95 | 4.5899 | 0.3432 | 0.6239 | 0.4110 | 0.7733 | 0.3356 | 0.2028 | 0.3165 | 0.5688 | 0.4067 | 0.5551 | 0.5688 | 0.3071 |
| 4.0369 | 6.0 | 114 | 4.4640 | 0.3575 | 0.6764 | 0.4282 | 0.7161 | 0.3926 | 0.2391 | 0.3084 | 0.6413 | 0.4165 | 0.4951 | 0.6413 | 0.3492 |
| 4.0052 | 7.0 | 133 | 4.3708 | 0.3529 | 0.6907 | 0.4313 | 0.7169 | 0.4237 | 0.2521 | 0.3088 | 0.6703 | 0.4229 | 0.4941 | 0.6703 | 0.3594 |
| 3.8847 | 8.0 | 152 | 4.3291 | 0.3645 | 0.7105 | 0.4445 | 0.7205 | 0.4312 | 0.2569 | 0.3170 | 0.6812 | 0.4327 | 0.5006 | 0.6812 | 0.3678 |
| 3.8223 | 9.0 | 171 | 4.3064 | 0.3676 | 0.7080 | 0.4457 | 0.7196 | 0.4326 | 0.2643 | 0.3160 | 0.6812 | 0.4317 | 0.4985 | 0.6812 | 0.3716 |
| 4.3457 | 10.0 | 190 | 4.2915 | 0.3850 | 0.7226 | 0.4627 | 0.7130 | 0.4503 | 0.2846 | 0.3282 | 0.6957 | 0.4460 | 0.4983 | 0.6957 | 0.3925 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
Mahmoud3899/reason_new | Mahmoud3899 | 2024-10-27T19:44:15Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T16:01:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
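Until the card is completed, a minimal sketch for querying the classifier is shown below; the label names and intended input domain are not documented, so the returned labels may be raw ids.

```python
from transformers import pipeline

# Generic sequence-classification pipeline; labels come from the model config.
clf = pipeline("text-classification", model="Mahmoud3899/reason_new")
print(clf("Example input sentence."))
```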
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
g-assismoraes/mdeberta-semeval25_narratives09_fold1 | g-assismoraes | 2024-10-27T19:41:37Z | 161 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T19:37:17Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_narratives09_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_narratives09_fold1
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1440
- Precision Samples: 0.3489
- Recall Samples: 0.7666
- F1 Samples: 0.4484
- Precision Macro: 0.6713
- Recall Macro: 0.4701
- F1 Macro: 0.2642
- Precision Micro: 0.3133
- Recall Micro: 0.7518
- F1 Micro: 0.4423
- Precision Weighted: 0.4454
- Recall Weighted: 0.7518
- F1 Weighted: 0.3929
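Because the scores above are multilabel (samples-averaged), inference presumably applies a per-label sigmoid rather than a softmax. A hedged sketch is shown below; the 0.5 threshold and the use of `id2label` names are assumptions, not documented in this card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "g-assismoraes/mdeberta-semeval25_narratives09_fold1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example article text.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]          # one probability per narrative label
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```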
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 5.3976 | 1.0 | 19 | 5.3094 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0476 | 0.0476 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 5.0729 | 2.0 | 38 | 5.0051 | 0.2991 | 0.4812 | 0.3465 | 0.8683 | 0.2245 | 0.1355 | 0.3056 | 0.4496 | 0.3639 | 0.6682 | 0.4496 | 0.2244 |
| 4.799 | 3.0 | 57 | 4.7268 | 0.3634 | 0.5035 | 0.3759 | 0.8348 | 0.2364 | 0.1574 | 0.3291 | 0.4640 | 0.3851 | 0.6206 | 0.4640 | 0.2617 |
| 4.4077 | 4.0 | 76 | 4.5072 | 0.3846 | 0.6225 | 0.4435 | 0.7933 | 0.3190 | 0.2043 | 0.3383 | 0.5755 | 0.4261 | 0.5591 | 0.5755 | 0.3232 |
| 4.1905 | 5.0 | 95 | 4.3919 | 0.4006 | 0.6444 | 0.4575 | 0.7484 | 0.3320 | 0.2140 | 0.3395 | 0.5935 | 0.4319 | 0.5242 | 0.5935 | 0.3411 |
| 4.1939 | 6.0 | 114 | 4.2724 | 0.3817 | 0.7296 | 0.4634 | 0.7094 | 0.4205 | 0.2478 | 0.3229 | 0.7050 | 0.4429 | 0.4663 | 0.7050 | 0.3791 |
| 3.9286 | 7.0 | 133 | 4.2600 | 0.3753 | 0.7336 | 0.4620 | 0.6853 | 0.4257 | 0.2568 | 0.3311 | 0.7050 | 0.4506 | 0.4556 | 0.7050 | 0.3882 |
| 3.8896 | 8.0 | 152 | 4.1871 | 0.3528 | 0.7581 | 0.4505 | 0.6713 | 0.4559 | 0.2625 | 0.3188 | 0.7374 | 0.4452 | 0.4462 | 0.7374 | 0.3929 |
| 3.993 | 9.0 | 171 | 4.1598 | 0.3525 | 0.7629 | 0.4503 | 0.6712 | 0.4645 | 0.2639 | 0.3170 | 0.7446 | 0.4447 | 0.4443 | 0.7446 | 0.3920 |
| 4.1424 | 10.0 | 190 | 4.1440 | 0.3489 | 0.7666 | 0.4484 | 0.6713 | 0.4701 | 0.2642 | 0.3133 | 0.7518 | 0.4423 | 0.4454 | 0.7518 | 0.3929 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
Viscoke/caf3 | Viscoke | 2024-10-27T19:35:05Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T19:32:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
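No snippet is provided in the card; the following is only a minimal sketch, assuming the checkpoint works with the standard transformers text-generation pipeline (the repository is tagged `llama` and `text-generation`):

```py
from transformers import pipeline

# Hypothetical usage; adjust device/dtype settings to your hardware.
pipe = pipeline("text-generation", model="Viscoke/caf3", device_map="auto")
print(pipe("Hello, my name is", max_new_tokens=50)[0]["generated_text"])
```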
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hanwen1232/bert-finetuned-ner | hanwen1232 | 2024-10-27T19:30:06Z | 119 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-10-27T18:56:49Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1749
- Precision: 0.5782
- Recall: 0.6635
- F1: 0.6179
- Accuracy: 0.9548
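A hedged usage sketch (not part of the original card), assuming the fine-tuned label mapping is bundled with the checkpoint:

```py
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="hanwen1232/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```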
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 249 | 0.2258 | 0.4744 | 0.6031 | 0.5311 | 0.9355 |
| No log | 2.0 | 498 | 0.2214 | 0.5604 | 0.6170 | 0.5873 | 0.9446 |
| 0.2066 | 3.0 | 747 | 0.2324 | 0.5223 | 0.6499 | 0.5792 | 0.9414 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.4.1+cpu
- Datasets 3.0.2
- Tokenizers 0.20.1
|
BEE-spoke-data/tFINE-680m-e32-d16-infinity_instruct-L2 | BEE-spoke-data | 2024-10-27T19:26:12Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"gqa",
"instruct",
"en",
"dataset:pszemraj/infinity-instruct-7m-T2T_en",
"base_model:BEE-spoke-data/tFINE-680m-e32-d16-infinity_instruct-L1",
"base_model:finetune:BEE-spoke-data/tFINE-680m-e32-d16-infinity_instruct-L1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-25T14:57:28Z | ---
library_name: transformers
language:
- en
license: apache-2.0
base_model: BEE-spoke-data/tFINE-680m-e32-d16-infinity_instruct-L1
tags:
- gqa
- t5
- instruct
datasets:
- pszemraj/infinity-instruct-7m-T2T_en
pipeline_tag: text2text-generation
---
# tFINE-680m-e32-d16-infinity_instruct-L2
This is an instruction-tuned version of a pretrained T5 with GQA (grouped-query attention).
## Model description
This model is a fine-tuned version of [BEE-spoke-data/tFINE-680m-e32-d16-infinity_instruct-L1](https://huggingface.co/BEE-spoke-data/tFINE-680m-e32-d16-infinity_instruct-L1) on the pszemraj/infinity-instruct-7m-T2T_en dataset (config `deduped-L2`).
It achieves the following results on the evaluation set:
- Loss: 1.3139
- Num Input Tokens Seen: 361724696
## usage
Prerequisite: you need the [t5-gqa fork of transformers](https://huggingface.co/BEE-spoke-data/tFINE-680m-e32-d16-gqa-flan#testing) installed, along with accelerate.
```py
from transformers import pipeline
pipe = pipeline(
"text2text-generation",
model="BEE-spoke-data/tFINE-680m-e32-d16-infinity_instruct-L2",
device_map="auto",
)
prompt = "Write me a python fn that demonstrates an advanced sorting algorithm"
res = pipe(
prompt, max_new_tokens=384, num_beams=4, early_stopping=True, repetition_penalty=1.1
)
print(res[0]["generated_text"])
```
## Quick eval
Quick eval for: `BEE-spoke-data/tFINE-680m-e32-d16-infinity_instruct-L2`
hf (pretrained=BEE-spoke-data/tFINE-680m-e32-d16-infinity_instruct-L2,trust_remote_code=True,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 8
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|-------------|------:|------|-----:|--------|---|-----:|---|------|
|boolq | 2|none | 0|acc |↑ |0.6364|± |0.0084|
|openbookqa | 1|none | 0|acc |↑ |0.1480|± |0.0159|
| | |none | 0|acc_norm|↑ |0.2860|± |0.0202|
|piqa | 1|none | 0|acc |↑ |0.6083|± |0.0114|
| | |none | 0|acc_norm|↑ |0.6132|± |0.0114|
|social_iqa | 0|none | 0|acc |↑ |0.3854|± |0.0110|
|tinyArc | 0|none | 25|acc_norm|↑ |0.3122|± | N/A|
|tinyHellaswag| 0|none | 10|acc_norm|↑ |0.3356|± | N/A|
|tinyMMLU | 0|none | 0|acc_norm|↑ |0.2793|± | N/A|
|winogrande | 1|none | 0|acc |↑ |0.5201|± |0.0140|
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 17868
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- total_eval_batch_size: 8
- optimizer: paged_ademamix_32bit (no additional optimizer arguments)
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 1.4008 | 0.2534 | 1000 | 1.4020 | 91375832 |
| 1.3456 | 0.5068 | 2000 | 1.3669 | 182939052 |
| 1.3437 | 0.7602 | 3000 | 1.3378 | 274855796 | |
MiniLLM/MiniPLM-Qwen-200M | MiniLLM | 2024-10-27T19:19:09Z | 248 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:monology/pile-uncopyrighted",
"dataset:MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5",
"arxiv:2410.17215",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-17T23:28:00Z | ---
library_name: transformers
license: apache-2.0
datasets:
- monology/pile-uncopyrighted
- MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
---
# MiniPLM-Qwen-200M
[paper](https://arxiv.org/abs/2410.17215) | [code](https://github.com/thu-coai/MiniPLM)
**MiniPLM-Qwen-200M** is a 200M-parameter model with the Qwen architecture, pre-trained from scratch on [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted) using the MiniPLM knowledge distillation framework, with the [official Qwen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B) as the teacher model.
We also open-source the [pre-training corpus](https://huggingface.co/datasets/MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5) refined by Difference Sampling in MiniPLM for reproducibility.
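A minimal usage sketch, assuming the checkpoint loads with the stock Qwen2 classes in transformers (the repository is tagged `qwen2` / `text-generation`); it is not an official snippet from the authors:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("MiniLLM/MiniPLM-Qwen-200M")
model = AutoModelForCausalLM.from_pretrained("MiniLLM/MiniPLM-Qwen-200M")

# Illustrative prompt only; this is a base (pre-trained) LM rather than a chat-tuned one.
inputs = tok("The Pile is a large, diverse corpus of", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```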
<p align='left'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/2BqT0NgkmIXYlktovw9kG.png" width="1000">
</p>
## Evaluation
MiniPLM models achieve better performance given the same computation and scale well across model sizes:
<p align='left'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/EOYzajQcwQFT5PobqL3j0.png" width="1000">
</p>
## Baseline Models
+ [Conventional Pre-Training](https://huggingface.co/MiniLLM/Pretrain-Qwen-200M)
+ [VanillaKD](https://huggingface.co/MiniLLM/VanillaKD-Pretrain-Qwen-200M)
## Citation
```bibtex
@article{miniplm,
title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
journal={arXiv preprint arXiv:2410.17215},
year={2024}
}
``` |
zeeshan73/Text2SQL_mistral_7b_cosine_lr | zeeshan73 | 2024-10-27T19:18:34Z | 11 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2024-10-27T14:02:25Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.3
datasets:
- generator
library_name: peft
license: apache-2.0
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: mistral_7b_cosine_lr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_7b_cosine_lr
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3993
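A hedged sketch of loading this LoRA adapter with PEFT; the prompt format below is purely illustrative, since the card does not document the expected input template:

```py
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the Mistral-7B-Instruct-v0.3 base weights and applies this adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    "zeeshan73/Text2SQL_mistral_7b_cosine_lr", device_map="auto"
)
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")

prompt = "Translate to SQL: list all customers who placed an order in 2023."  # hypothetical prompt format
inputs = tok(prompt, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```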
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- lr_scheduler_warmup_steps: 15
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.1885 | 0.0549 | 10 | 61.4970 |
| 37.6512 | 0.1098 | 20 | 12.9405 |
| 14.576 | 0.1647 | 30 | 27.9852 |
| 9.5892 | 0.2196 | 40 | 6.4722 |
| 7.7639 | 0.2745 | 50 | 6.8158 |
| 6.3878 | 0.3294 | 60 | 6.3811 |
| 6.6118 | 0.3844 | 70 | 5.9281 |
| 6.006 | 0.4393 | 80 | 5.6753 |
| 6.1011 | 0.4942 | 90 | 5.8083 |
| 5.7396 | 0.5491 | 100 | 5.6193 |
| 5.5128 | 0.6040 | 110 | 5.4848 |
| 5.4599 | 0.6589 | 120 | 5.4267 |
| 5.5193 | 0.7138 | 130 | 5.4757 |
| 5.4488 | 0.7687 | 140 | 5.4422 |
| 5.4257 | 0.8236 | 150 | 5.3845 |
| 5.3938 | 0.8785 | 160 | 5.3727 |
| 5.3937 | 0.9334 | 170 | 5.3646 |
| 5.3916 | 0.9883 | 180 | 5.4825 |
| 5.4217 | 1.0432 | 190 | 5.3534 |
| 5.3915 | 1.0981 | 200 | 5.3497 |
| 5.3656 | 1.1531 | 210 | 5.3416 |
| 5.3718 | 1.2080 | 220 | 5.3691 |
| 5.3763 | 1.2629 | 230 | 5.4102 |
| 5.4039 | 1.3178 | 240 | 5.3993 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0 |