modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
jiangchengchengNLP/opt-6.7b-MINMAX | jiangchengchengNLP | 2024-10-21T11:18:32Z | 86 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2024-10-21T11:16:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
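In the meantime, a minimal, hedged sketch of a standard 🤗 transformers loading flow (the repository id comes from this card's metadata; the prompt and generation settings are illustrative, not from the model authors):
```python
# Hedged sketch: standard transformers text-generation loading; not an
# official example from the model authors. The "8-bit" tag suggests the
# checkpoint ships quantized weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jiangchengchengNLP/opt-6.7b-MINMAX"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```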
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf | RichardErkhov | 2024-10-21T11:17:46Z | 250 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T11:06:32Z | Quantization made by Richard Erkhov.
[GitHub](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc - GGUF
- Model creator: https://huggingface.co/ahmedheakl/
- Original model: https://huggingface.co/ahmedheakl/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q2_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q2_K.gguf) | Q2_K | 0.52GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.IQ3_XS.gguf) | IQ3_XS | 0.57GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.IQ3_M.gguf) | IQ3_M | 0.63GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q3_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q3_K.gguf) | Q3_K | 0.66GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q3_K_M.gguf) | Q3_K_M | 0.66GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q3_K_L.gguf) | Q3_K_L | 0.69GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q4_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q4_0.gguf) | Q4_0 | 0.72GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.IQ4_NL.gguf) | IQ4_NL | 0.73GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q4_K_S.gguf) | Q4_K_S | 0.76GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q4_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q4_K.gguf) | Q4_K | 0.81GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q4_K_M.gguf) | Q4_K_M | 0.81GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q4_1.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q4_1.gguf) | Q4_1 | 0.8GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q5_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q5_0.gguf) | Q5_0 | 0.87GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q5_K_S.gguf) | Q5_K_S | 0.89GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q5_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q5_K.gguf) | Q5_K | 0.93GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q5_K_M.gguf) | Q5_K_M | 0.93GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q5_1.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q5_1.gguf) | Q5_1 | 0.95GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q6_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q6_K.gguf) | Q6_K | 1.09GB |
| [asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q8_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q8_0.gguf) | Q8_0 | 1.33GB |
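Any GGUF-compatible runtime can load these files. As a hedged sketch (llama-cpp-python is an assumption, not named in this card), picking the Q4_K_M file from the table above:
```python
# Hedged sketch using llama-cpp-python's Hub integration; requires
# `pip install llama-cpp-python huggingface_hub`. The filename is one
# row from the quant table above; the prompt is illustrative.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc-gguf",
    filename="asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc.Q4_K_M.gguf",
)
out = llm("; translate this x86 assembly to ARM:", max_tokens=128)
print(out["choices"][0]["text"])
```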
Original model description:
---
library_name: transformers
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asm2asm-deepseek-1.3b-500k-2ep-tokenizer-x86-O0-arm-gnueabi-gcc
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
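A hedged sketch of how these settings might map onto `transformers.TrainingArguments`; the card does not include training code, and the output directory is hypothetical:
```python
# Hedged mapping of the listed hyperparameters onto TrainingArguments;
# output_dir is hypothetical. The Adam betas/epsilon match the listed
# values (which are also the transformers defaults).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="asm2asm-deepseek-1.3b-500k-2ep",  # hypothetical
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 1 device x batch 1 x 4 steps = total batch 4
    lr_scheduler_type="linear",
    num_train_epochs=2,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```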
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu118
- Datasets 3.0.0
- Tokenizers 0.19.1
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants at much higher speed than I would otherwise be able to. |
RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf | RichardErkhov | 2024-10-21T11:15:36Z | 247 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T11:04:14Z | Quantization made by Richard Erkhov.
[GitHub](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
asm2asm-deepseek-1.3b-100k-arm-x86-O2 - GGUF
- Model creator: https://huggingface.co/ahmedheakl/
- Original model: https://huggingface.co/ahmedheakl/asm2asm-deepseek-1.3b-100k-arm-x86-O2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q2_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q2_K.gguf) | Q2_K | 0.52GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.IQ3_XS.gguf) | IQ3_XS | 0.57GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.IQ3_M.gguf) | IQ3_M | 0.63GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q3_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q3_K.gguf) | Q3_K | 0.66GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q3_K_M.gguf) | Q3_K_M | 0.66GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q3_K_L.gguf) | Q3_K_L | 0.69GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q4_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q4_0.gguf) | Q4_0 | 0.72GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.IQ4_NL.gguf) | IQ4_NL | 0.73GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q4_K_S.gguf) | Q4_K_S | 0.76GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q4_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q4_K.gguf) | Q4_K | 0.81GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q4_K_M.gguf) | Q4_K_M | 0.81GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q4_1.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q4_1.gguf) | Q4_1 | 0.8GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q5_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q5_0.gguf) | Q5_0 | 0.87GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q5_K_S.gguf) | Q5_K_S | 0.89GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q5_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q5_K.gguf) | Q5_K | 0.93GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q5_K_M.gguf) | Q5_K_M | 0.93GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q5_1.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q5_1.gguf) | Q5_1 | 0.95GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q6_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q6_K.gguf) | Q6_K | 1.09GB |
| [asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q8_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-arm-x86-O2-gguf/blob/main/asm2asm-deepseek-1.3b-100k-arm-x86-O2.Q8_0.gguf) | Q8_0 | 1.33GB |
Original model description:
---
library_name: transformers
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: asm2asm-deepseek-1.3b-100k-arm-x86-O2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asm2asm-deepseek-1.3b-100k-arm-x86-O2
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu118
- Datasets 3.0.0
- Tokenizers 0.19.1
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants at much higher speed than I would otherwise be able to. |
RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf | RichardErkhov | 2024-10-21T11:15:02Z | 247 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T11:04:04Z | Quantization made by Richard Erkhov.
[GitHub](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc - GGUF
- Model creator: https://huggingface.co/ahmedheakl/
- Original model: https://huggingface.co/ahmedheakl/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q2_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q2_K.gguf) | Q2_K | 0.52GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.IQ3_XS.gguf) | IQ3_XS | 0.57GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.IQ3_M.gguf) | IQ3_M | 0.63GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q3_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q3_K.gguf) | Q3_K | 0.66GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q3_K_M.gguf) | Q3_K_M | 0.66GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q3_K_L.gguf) | Q3_K_L | 0.69GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q4_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q4_0.gguf) | Q4_0 | 0.72GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.IQ4_NL.gguf) | IQ4_NL | 0.73GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q4_K_S.gguf) | Q4_K_S | 0.76GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q4_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q4_K.gguf) | Q4_K | 0.81GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q4_K_M.gguf) | Q4_K_M | 0.81GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q4_1.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q4_1.gguf) | Q4_1 | 0.8GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q5_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q5_0.gguf) | Q5_0 | 0.87GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q5_K_S.gguf) | Q5_K_S | 0.89GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q5_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q5_K.gguf) | Q5_K | 0.93GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q5_K_M.gguf) | Q5_K_M | 0.93GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q5_1.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q5_1.gguf) | Q5_1 | 0.95GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q6_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q6_K.gguf) | Q6_K | 1.09GB |
| [asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q8_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc-gguf/blob/main/asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc.Q8_0.gguf) | Q8_0 | 1.33GB |
Original model description:
---
library_name: transformers
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asm2asm-deepseek-1.3b-100k-x86-O2-arm-gnueabi-gcc
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu118
- Datasets 3.0.0
- Tokenizers 0.19.1
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants at much higher speed than I would otherwise be able to. |
GrennKren/magnum-v4-22b-4bit | GrennKren | 2024-10-21T11:09:05Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-21T11:05:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
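In the meantime, a hedged sketch using the tokenizer's chat template (the model is tagged `conversational`, `4-bit`, and `bitsandbytes`; the prompt is illustrative):
```python
# Hedged sketch, not an official example: the 4-bit bitsandbytes weights
# load as stored; requires `pip install bitsandbytes accelerate`.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GrennKren/magnum-v4-22b-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```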
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RamsesDIIP/me5-large-construction-esp-cat-v2 | RamsesDIIP | 2024-10-21T11:02:20Z | 13 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10590",
"loss:MultipleNegativesRankingLoss",
"multilingual",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-large",
"base_model:finetune:intfloat/multilingual-e5-large",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-10-21T11:00:28Z | ---
base_model: intfloat/multilingual-e5-large
language:
- multilingual
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10590
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'query: Silla de listones de madera tropical, de 68 cm de longitud,
con protección fungicida insecticida e hidrófuga, acabado incoloro, con respaldo,
soportes y apoyabrazos de madera, colocada con fijaciones mecánicas'
sentences:
- 'passage: Pavimento de losas de concreto de diseño irregular con bordes rectos,
de 10 cm de grosor, a un costo elevado, instaladas con mezcla de cemento en proporción
1:4, en un área urbana con fácil acceso, en aceras de más de 3 y hasta 5 m de
ancho o en calzadas/plataformas únicas de más de 7 y hasta 12 m de ancho, considerando
la interferencia de servicios o mobiliario urbano, en proyectos de hasta 1 m2.'
- 'passage: Silla de listones de madera reciclada, de 68 cm de longitud, con tratamiento
anti-UV y resistente a la humedad, acabado mate, sin respaldo, soportes y apoyabrazos
de metal, ensamblada con adhesivos estructurales.'
- 'passage: Silla de madera tropical de 68 cm de largo, tratada con protección contra
hongos e insectos, con un acabado transparente, que incluye respaldo, soportes
y reposabrazos de madera, ensamblada con anclajes mecánicos.'
- source_sentence: 'query: Radiador de pie con intercambiador de calor interno sencillo
de tubo de cobre con aletas de aluminio y de bajo caudal de agua, con armazón
de plancha de acero antivandálica lacado de 20 a 25 cm de ancho 65 a 75 cm de
altura y 125 a 145 cm de longitud, con purgador, tapon de vaciado y elementos
de montaje, colocado de pie'
sentences:
- 'passage: Radiador vertical con intercambiador de calor de tubo de cobre y aletas
de aluminio, diseñado para un bajo flujo de agua, con estructura de acero lacado
resistente al vandalismo, dimensiones de 20 a 25 cm de ancho, 65 a 75 cm de alto
y 125 a 145 cm de largo, equipado con purgador, tapón de drenaje y accesorios
de instalación.'
- 'passage: Formigonament de rases i pous, utilitzant formigó HA - 35 / B / 10 /
XC4 + XA1 amb una dosi de ciment de 325 kg/m3 i una relació aigua-ciment no superior
a 0.5, abocat mitjançant cubilot.'
- 'passage: Radiador de pared con intercambiador de calor externo complejo de tubo
de acero inoxidable sin aletas y de alto caudal de agua, con estructura de aluminio
anodizado de 30 a 35 cm de ancho, 70 a 80 cm de altura y 130 a 150 cm de longitud,
sin purgador, tapón de llenado y elementos de desmontaje, instalado en posición
horizontal.'
- source_sentence: 'query: Formación de forjado aligerado de espesor 10+15+5 cm con
un canto total de 30 cm y de 163 a 353 kN·m de momento flector último, para una
sobrecarga (uso+permanentes) de 4 a 5 kN/m2, formado por semiplaca de hormigón
pretensado (prelosa) de 10 cm de espesor, 120 a 250 cm de anchura y de luz máx.
8 m, bovedilla poliestireno de 15 cm de altura y capa de compresión de 5 cm de
espesor, con una cuantía de 3 kg/ m2 de armadura AP500 S de acero en barras corrugadas,
armadura AP500 T en mallas electrosoldadas de 15x15 cm y D 6 mm, y una cuantia
de hormigón 0.086 m3/m2 hormigón para armar HA - 30 / F / 20 / XC4 + XD3 con una
cantidad de cemento de 325 kg/m3 i relación agua cemento =< 0.5, vertido con cubilote'
sentences:
- 'passage: Pavimento de mezcla asfáltica continua en caliente tipo AC 22 base B
35/50 G de baja temperatura, utilizando betún asfáltico de penetración y granulometría
gruesa para la capa base, reciclado de mezcla bituminosa y árido calcáreo, aplicado
y compactado en un entorno urbano con movilidad restringida, en aceras de más
de 3 y hasta 5 m de ancho o calzada/plataforma única de más de 7 y hasta 12 m
de ancho, afectado por servicios o elementos de mobiliario urbano, en proyectos
que superan los 10 m2.'
- 'passage: Formación de forjado macizo de espesor 20 cm con un canto total de 40
cm y de 200 a 400 kN·m de momento flector último, para una sobrecarga (uso+permanentes)
de 6 a 7 kN/m2, formado por losas de hormigón armado de 20 cm de espesor, 150
a 300 cm de anchura y de luz máx. 10 m, sin bovedilla, y capa de compresión de
10 cm de espesor, con una cuantía de 4 kg/m2 de armadura AP600 S de acero en barras
corrugadas, armadura AP600 T en mallas electrosoldadas de 20x20 cm y D 8 mm, y
una cuantía de hormigón 0.1 m3/m2 hormigón para armar HA - 35 / F / 25 / XC3 +
XD2 con una cantidad de cemento de 350 kg/m3 y relación agua cemento =< 0.45,
vertido con bomba.'
- 'passage: Construcción de un forjado aligerado con un espesor total de 30 cm,
compuesto por una semiplaca de hormigón pretensado de 10 cm, bovedillas de poliestireno
de 15 cm y una capa de compresión de 5 cm, diseñado para soportar momentos flectores
de 163 a 353 kN·m y una sobrecarga de 4 a 5 kN/m2, utilizando armadura de acero
AP500 S y AP500 T, y hormigón HA - 30 con una dosificación de cemento de 325 kg/m3.'
- source_sentence: 'query: Hormigonado para encepados, con hormigón para armar con
aditivo hidrófugo HA - 35 / B / 10 / XC4 + XD3 con una cantidad de cemento de
325 kg/m3 i relación agua cemento =< 0.5, colocado con bomba en obras de ingeniería
civil'
sentences:
- 'passage: Colocación de hormigón para cimentaciones, utilizando mezcla con aditivo
impermeabilizante HA - 35 / B / 10 / XC4 + XD3, con una dosificación de cemento
de 325 kg/m3 y una relación agua-cemento menor o igual a 0.5, aplicado mediante
bomba en proyectos de infraestructura.'
- 'passage: Coberta amb panell sandvitx d''acer amb aïllament de poliisocianurat
(PIR) que presenta millores en la resistència al foc, amb un gruix total de 40
mm, cara exterior nervada en color estàndard, diferent del blanc, i cara interior
llisa, prelacada a les dues cares, gruix de les planxes (ext/int) 0.6/0.4 mm,
junt longitudinal encadenat, amb fixació oculta mitjançant tapajunts, amb un pendent
mínim del 4%.'
- 'passage: Hormigonado para fundaciones, con hormigón para estructuras con aditivo
retardante HA - 35 / B / 10 / XC4 + XD3 con una cantidad de cemento de 350 kg/m3
i relación agua cemento =< 0.6, colocado manualmente en proyectos de edificación.'
- source_sentence: 'query: Paret divisòria per a interior de panell de fusta contralaminada
de 150 mm de gruix formada per 5 capes de fusta d''avet C24, encolades amb adhesiu
sense urea-formaldehíde amb la disposició transversal de la fusta en les dues
cares del panell, amb tractament hidròfug, amb acabat superficial tipus habitatge
en les dues cares amb fusta d''avet roig amb fusta de Làrix europeu envernissat
en l''altre en l''altre col·locat amb fixacions mecàniques, desolidarització del
suport amb banda resilient de cautxú EPDM extruït, fixada amb grapes; unió entre
panells encadellat fixats amb cargols d''acer i segellat de la cara interior dels
junts amb cinta adhesiva de goma butílica, amb armadura de polièster i segellat
de la cara exterior amb cinta autoadhessiva de polietilè amb adhesiu acrílic sense
dissolvents, amb armadura de polietilè i pel·lícula de separació de paper siliconat,
prèvia aplicació d''imprimació incolora a base d'' una dispersió acrílica sense
dissolvents; resolució de traves amb cargols d''acer; fixació de panells amb elements
d''acer galvanitzat'
sentences:
- 'passage: Banc de metall d''alumini anoditzat, de 150 cm de llarg, amb suports
de fosa d''alumini, instal·lat amb fixacions mecàniques, en un entorn urbà amb
dificultats d''accés, en voreres de fins a 3 m d''amplada o calçades/plataformes
úniques de fins a 7 m d''amplada, amb interferències per serveis o elements de
mobiliari urbà, en projectes de 1 a 5 unitats.'
- 'passage: Paret separadora per a interiors de panell de fusta contralaminada de
150 mm de gruix composta per 5 capes de fusta d''avet C24, encolades amb adhesiu
sense urea-formaldehíde, amb la disposició transversal de la fusta a les dues
cares del panell, tractada per a resistència a la humitat, amb acabat superficial
de tipus residencial en ambdues cares amb fusta d''avet roig i fusta de Làrix
europeu envernissat a l''altre costat, instal·lada amb fixacions mecàniques, desolidarització
del suport mitjançant banda resilient de cautxú EPDM extruït, fixada amb grapes;
unió entre panells encadenada amb cargols d''acer i segellat de la cara interior
dels junts amb cinta adhesiva de goma butílica, amb armadura de polièster i segellat
de la cara exterior amb cinta autoadhesiva de polietilè amb adhesiu acrílic sense
dissolvents, amb armadura de polietilè i pel·lícula de separació de paper siliconat,
després d''aplicar una imprimació incolora a base d''una dispersió acrílica sense
dissolvents; resolució de traves amb cargols d''acer; fixació de panells amb elements
d''acer galvanitzat.'
- 'passage: Paret divisòria per a exterior de panell de fusta massissa de 200 mm
de gruix formada per 3 capes de fusta de pi, encolades amb adhesiu amb urea-formaldehíde
amb la disposició longitudinal de la fusta en una sola cara del panell, sense
tractament hidròfug, amb acabat superficial tipus industrial en una sola cara
amb fusta de pi blanc, col·locat amb soldadura, unió entre panells amb encaix
fixats amb cargols de plàstic i segellat de la cara interior dels junts amb cinta
adhesiva de polipropilè, amb armadura de fibra de vidre i segellat de la cara
exterior amb cinta de tela amb adhesiu de cautxú, amb armadura de fibra de vidre
i pel·lícula de separació de plàstic, prèvia aplicació d''imprimació colorida
a base d''una dispersió aquosa; resolució de traves amb cargols de plàstic; fixació
de panells amb elements de plàstic reciclable.'
model-index:
- name: Multilingual E5 Large trained with triplet loss
results:
- task:
type: triplet
name: Triplet
dataset:
name: validation set
type: validation-set
metrics:
- type: cosine_accuracy
value: 0.8810198300283286
name: Cosine Accuracy
- type: dot_accuracy
value: 0.11898016997167139
name: Dot Accuracy
- type: manhattan_accuracy
value: 0.8800755429650614
name: Manhattan Accuracy
- type: euclidean_accuracy
value: 0.8810198300283286
name: Euclidean Accuracy
- type: max_accuracy
value: 0.8810198300283286
name: Max Accuracy
- type: cosine_accuracy
value: 1.0
name: Cosine Accuracy
- type: dot_accuracy
value: 0.0
name: Dot Accuracy
- type: manhattan_accuracy
value: 1.0
name: Manhattan Accuracy
- type: euclidean_accuracy
value: 1.0
name: Euclidean Accuracy
- type: max_accuracy
value: 1.0
name: Max Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: test set
type: test-set
metrics:
- type: cosine_accuracy
value: 1.0
name: Cosine Accuracy
- type: dot_accuracy
value: 0.0
name: Dot Accuracy
- type: manhattan_accuracy
value: 1.0
name: Manhattan Accuracy
- type: euclidean_accuracy
value: 1.0
name: Euclidean Accuracy
- type: max_accuracy
value: 1.0
name: Max Accuracy
---
# Multilingual E5 Large trained with triplet loss
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) on the csv dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) <!-- at revision ab10c1a7f42e74530fe7ae5be82e6d4f11a719eb -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- csv
- **Language:** multilingual
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("RamsesDIIP/me5-large-construction-esp-cat-v2")
# Run inference
sentences = [
"query: Paret divisòria per a interior de panell de fusta contralaminada de 150 mm de gruix formada per 5 capes de fusta d'avet C24, encolades amb adhesiu sense urea-formaldehíde amb la disposició transversal de la fusta en les dues cares del panell, amb tractament hidròfug, amb acabat superficial tipus habitatge en les dues cares amb fusta d'avet roig amb fusta de Làrix europeu envernissat en l'altre en l'altre col·locat amb fixacions mecàniques, desolidarització del suport amb banda resilient de cautxú EPDM extruït, fixada amb grapes; unió entre panells encadellat fixats amb cargols d'acer i segellat de la cara interior dels junts amb cinta adhesiva de goma butílica, amb armadura de polièster i segellat de la cara exterior amb cinta autoadhessiva de polietilè amb adhesiu acrílic sense dissolvents, amb armadura de polietilè i pel·lícula de separació de paper siliconat, prèvia aplicació d'imprimació incolora a base d' una dispersió acrílica sense dissolvents; resolució de traves amb cargols d'acer; fixació de panells amb elements d'acer galvanitzat",
"passage: Paret separadora per a interiors de panell de fusta contralaminada de 150 mm de gruix composta per 5 capes de fusta d'avet C24, encolades amb adhesiu sense urea-formaldehíde, amb la disposició transversal de la fusta a les dues cares del panell, tractada per a resistència a la humitat, amb acabat superficial de tipus residencial en ambdues cares amb fusta d'avet roig i fusta de Làrix europeu envernissat a l'altre costat, instal·lada amb fixacions mecàniques, desolidarització del suport mitjançant banda resilient de cautxú EPDM extruït, fixada amb grapes; unió entre panells encadenada amb cargols d'acer i segellat de la cara interior dels junts amb cinta adhesiva de goma butílica, amb armadura de polièster i segellat de la cara exterior amb cinta autoadhesiva de polietilè amb adhesiu acrílic sense dissolvents, amb armadura de polietilè i pel·lícula de separació de paper siliconat, després d'aplicar una imprimació incolora a base d'una dispersió acrílica sense dissolvents; resolució de traves amb cargols d'acer; fixació de panells amb elements d'acer galvanitzat.",
"passage: Paret divisòria per a exterior de panell de fusta massissa de 200 mm de gruix formada per 3 capes de fusta de pi, encolades amb adhesiu amb urea-formaldehíde amb la disposició longitudinal de la fusta en una sola cara del panell, sense tractament hidròfug, amb acabat superficial tipus industrial en una sola cara amb fusta de pi blanc, col·locat amb soldadura, unió entre panells amb encaix fixats amb cargols de plàstic i segellat de la cara interior dels junts amb cinta adhesiva de polipropilè, amb armadura de fibra de vidre i segellat de la cara exterior amb cinta de tela amb adhesiu de cautxú, amb armadura de fibra de vidre i pel·lícula de separació de plàstic, prèvia aplicació d'imprimació colorida a base d'una dispersió aquosa; resolució de traves amb cargols de plàstic; fixació de panells amb elements de plàstic reciclable.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `validation-set`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:-------------------|:----------|
| cosine_accuracy | 0.881 |
| dot_accuracy | 0.119 |
| manhattan_accuracy | 0.8801 |
| euclidean_accuracy | 0.881 |
| **max_accuracy** | **0.881** |
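A hedged sketch of how such accuracies are produced with the linked evaluator (the triplet strings below are truncated samples from this card; pass full validation triplets in practice):
```python
# Hedged sketch: TripletEvaluator reports the fraction of triplets where
# the anchor embeds closer to the positive than to the negative.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("RamsesDIIP/me5-large-construction-esp-cat-v2")
evaluator = TripletEvaluator(
    anchors=["query: Filtre per a tub de 65 mm de diàmetre nominal ..."],
    positives=["passage: Filtre per a tub de 65 mm de diàmetre nominal ..."],
    negatives=["passage: Filtre per a tub de 50 mm de diàmetre nominal ..."],
    name="validation-set",
)
print(evaluator(model))  # dict of accuracies keyed by metric name
```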
#### Triplet
* Dataset: `validation-set`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:-------------------|:--------|
| cosine_accuracy | 1.0 |
| dot_accuracy | 0.0 |
| manhattan_accuracy | 1.0 |
| euclidean_accuracy | 1.0 |
| **max_accuracy** | **1.0** |
#### Triplet
* Dataset: `test-set`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:-------------------|:--------|
| cosine_accuracy | 1.0 |
| dot_accuracy | 0.0 |
| manhattan_accuracy | 1.0 |
| euclidean_accuracy | 1.0 |
| **max_accuracy** | **1.0** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### csv
* Dataset: csv
* Size: 10,590 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 28 tokens</li><li>mean: 111.15 tokens</li><li>max: 320 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 108.5 tokens</li><li>max: 316 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 101.87 tokens</li><li>max: 303 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: Filtre per a tub de 65 mm de diàmetre nominal, de 25 bar de pressió màxima de servei, cilíndric, amb portafiltres de 6" de diàmetre i muntat entre tubs</code> | <code>passage: Filtre per a tub de 65 mm de diàmetre nominal, amb una pressió màxima de 25 bar, de forma cilíndrica, equipat amb un portafiltres de 6" de diàmetre i instal·lat entre les canonades.</code> | <code>passage: Filtre per a tub de 50 mm de diàmetre nominal, de 20 bar de pressió màxima de servei, rectangular, amb portafiltres de 4" de diàmetre i instal·lat en línia amb vàlvules.</code> |
| <code>query: Radiador de fosa de 24 elements amb 3 columnes, de 725 mm d'alçària màxima, per a aigua calenta de 6 bar i 110 °C, com a màxim i amb suport per a anar encastat, amb valvuleria termostabilitzable per a sistema monotubular i purgador automàtic</code> | <code>passage: Radiador de fosa de 24 elements amb 3 columnes, d'una alçària màxima de 725 mm, dissenyat per a aigua calenta a 6 bar i 110 °C, amb suport per a instal·lar encastat, incloent valvuleria termostabilitzable per a sistema monotubular i purgador automàtic.</code> | <code>passage: Radiador de fosa de 20 elements amb 4 columnes, de 600 mm d'alçària mínima, per a aigua freda de 4 bar i 80 °C, com a mínim i amb suport per a instal·lar a la paret, amb valvuleria manual per a sistema bitubular i purgador manual.</code> |
| <code>query: Tanca per a espais públics de perfils de fusta tractada tornejats de disposició horitzontal i vertical, de 975 a 1075 mm d'alçària amb estructura de muntants de fusta tractada, encastada al suport amb daus de formigó, en entorn urbà sense dificultat de mobilitat, en voreres > 3 i <= 5 m d'amplària o calçada/plataforma única > 7 i <= 12 m d'amplària, amb afectació per serveis o elements de mobiliari urbà, en actuacions de més de 100 m</code> | <code>passage: Tanca per a zones públiques amb perfils de fusta tractada, dissenyada tant en posició horitzontal com vertical, amb una alçària que varia entre 975 i 1075 mm, suportada per muntants de fusta tractada fixats a la base amb blocs de formigó, adequada per a entorns urbans amb mobilitat accessible, en voreres d'amplada entre 3 i 5 m o en calçades/plataformes d'amplada entre 7 i 12 m, tenint en compte la presència de serveis o mobiliari urbà, per a projectes que superin els 100 m.</code> | <code>passage: Tanca per a espais privats de perfils de metall galvanitzat amb disposició vertical i horitzontal, de 800 a 1000 mm d'alçària amb estructura de muntants de metall, fixada al sòl amb fonaments de formigó, en entorn rural amb dificultats de mobilitat, en camins > 2 i <= 4 m d'amplària o carreteres/plataformes > 5 i <= 10 m d'amplària, sense afectació per serveis o elements de mobiliari urbà, en actuacions de menys de 50 m.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### csv
* Dataset: csv
* Size: 10,590 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 111.13 tokens</li><li>max: 341 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 108.44 tokens</li><li>max: 310 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 102.0 tokens</li><li>max: 331 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: Tubo de aluminio extrusionado de 76 mm de diámetro, para señales de tráfico, colocada, en entorno urbano sin dificultad de movilidad, en aceras <= 3 m de ancho o calzada/plataforma única <= 7 m de ancho, con afectación por servicios o elementos de mobiliario urbano, en actuaciones de más de100 m</code> | <code>passage: Tubo de aluminio extruido de 76 mm de diámetro, destinado a señales viales, instalado en áreas urbanas con fácil acceso, en aceras de hasta 3 m de ancho o en calzadas/plataformas de hasta 7 m de ancho, considerando la interferencia de servicios o mobiliario urbano, en proyectos que superen los 100 m.</code> | <code>passage: Tubo de acero galvanizado de 80 mm de diámetro, para estructuras temporales, instalado en área rural con restricciones de acceso, en caminos <= 4 m de ancho o terrenos de servicio <= 8 m de ancho, sin interferencia de servicios o elementos de infraestructura, en proyectos de menos de 50 m.</code> |
| <code>query: Pavimento de losa de hormigón para pavimentos de 40x40 cm y 8 cm de espesor, de forma cuadrado, textura pétrea, precio alto, sobre lecho de arena de 5 cm de espesor, con relleno de juntas con arena fina y compactación del pavimento acabado, en entorno urbano con dificultad de mobilidad, en aceras > 3 y <= 5 m de ancho o calzada/plataforma única > 7 y <= 12 m de ancho, con afectación por servicios o elementos de mobiliario urbano, en actuaciones de hasta 1 m2</code> | <code>passage: Losas de concreto de 40x40 cm y 8 cm de grosor, con acabado rugoso, instaladas sobre una base de arena de 5 cm, con juntas rellenadas con arena fina y compactación final, en áreas urbanas con acceso limitado, en aceras de entre 3 y 5 m de ancho o calzadas de 7 a 12 m de ancho, considerando la interferencia de servicios y mobiliario urbano, en proyectos de hasta 1 m2.</code> | <code>passage: Pavimento de losa de cerámica para pavimentos de 30x30 cm y 1 cm de espesor, de forma rectangular, textura lisa, precio bajo, sobre base de grava de 2 cm de espesor, con sellado de juntas con silicona y nivelación del pavimento terminado, en entorno rural con fácil acceso, en caminos > 1 y <= 3 m de ancho o senderos/plataformas individuales > 5 y <= 7 m de ancho, sin afectación por servicios o elementos de mobiliario rural, en actuaciones de hasta 2 m2.</code> |
| <code>query: Reducción concéntrica de polietileno PE 100, fabricación manipulada, diámetro nominal DN 225 - DN 160, presión nominal PN 10 (SDR 17), para unión por soldadura a tope, de acuerdo con la norma UNE-EN 12201-3 y colocado en el fondo de la zanja en entorno urbano sin dificultad de movilidad, sin afectación por presencia de servicios en la zanja, con presencia de entibación</code> | <code>passage: Reducción concéntrica de polietileno PE 100, fabricada a medida, con diámetros nominales DN 225 - DN 160, presión nominal PN 10 (SDR 17), diseñada para unión por soldadura a tope, cumpliendo con la norma UNE-EN 12201-3 y colocada en el fondo de la zanja en un entorno urbano, garantizando fácil acceso y sin interferencias por servicios existentes, con soporte de entibación.</code> | <code>passage: Reducción excéntrica de polietileno PE 80, fabricación estándar, diámetro nominal DN 200 - DN 125, presión nominal PN 6 (SDR 21), para unión por soldadura eléctrica, de acuerdo con la norma UNE-EN 12201-2 y colocado en la parte superior de la zanja en entorno rural con dificultad de movilidad, con afectación por presencia de servicios en la zanja, sin necesidad de entibación.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
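For reference, these parameters match the loss's construction in Sentence Transformers; a minimal instantiation sketch (the checkpoint path is a placeholder):
```python
from sentence_transformers import SentenceTransformer, losses

# Placeholder path: substitute the checkpoint this card describes.
model = SentenceTransformer("path/to/checkpoint")

# scale=20.0 with the default cosine similarity mirrors the JSON above;
# in-batch positives of other anchors double as additional negatives.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)
```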
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 6
- `learning_rate`: 4e-05
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 6
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
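As a sketch, the non-default values above map onto the Sentence Transformers 3.x training API roughly as follows (the output directory is a placeholder; model, datasets, and trainer wiring are omitted):
```python
from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

# Mirrors the non-default hyperparameters listed above.
args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    learning_rate=4e-05,
    num_train_epochs=10,
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```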
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | validation-set_max_accuracy | test-set_max_accuracy |
|:----------:|:--------:|:-------------:|:---------------:|:---------------------------:|:---------------------:|
| 0 | 0 | - | - | 0.8810 | - |
| 0.1416 | 200 | 0.341 | 0.0055 | 1.0 | - |
| 0.2833 | 400 | 0.0079 | 0.0060 | 0.9991 | - |
| 0.4249 | 600 | 0.005 | 0.0020 | 1.0 | - |
| 0.5666 | 800 | 0.0013 | 0.0063 | 0.9981 | - |
| 0.7082 | 1000 | 0.0048 | 0.0044 | 0.9991 | - |
| 0.8499 | 1200 | 0.003 | 0.0048 | 1.0 | - |
| 0.9915 | 1400 | 0.013 | 0.0034 | 1.0 | - |
| 1.1331 | 1600 | 0.0135 | 0.0021 | 1.0 | - |
| 1.2748 | 1800 | 0.0063 | 0.0038 | 1.0 | - |
| 1.4164 | 2000 | 0.0066 | 0.0026 | 0.9991 | - |
| 1.5581 | 2200 | 0.0016 | 0.0027 | 0.9991 | - |
| 1.6997 | 2400 | 0.0013 | 0.0008 | 1.0 | - |
| 1.8414 | 2600 | 0.0034 | 0.0017 | 1.0 | - |
| 1.9830 | 2800 | 0.0045 | 0.0016 | 1.0 | - |
| 2.1246 | 3000 | 0.001 | 0.0004 | 1.0 | - |
| 2.2663 | 3200 | 0.0033 | 0.0009 | 1.0 | - |
| 2.4079 | 3400 | 0.0013 | 0.0019 | 0.9991 | - |
| 2.5496 | 3600 | 0.0009 | 0.0005 | 1.0 | - |
| 2.6912 | 3800 | 0.0023 | 0.0019 | 1.0 | - |
| 2.8329 | 4000 | 0.0012 | 0.0026 | 1.0 | - |
| 2.9745 | 4200 | 0.0003 | 0.0005 | 1.0 | - |
| 3.1161 | 4400 | 0.0021 | 0.0017 | 1.0 | - |
| 3.2578 | 4600 | 0.0012 | 0.0076 | 0.9991 | - |
| 3.3994 | 4800 | 0.0006 | 0.0004 | 1.0 | - |
| 3.5411 | 5000 | 0.0002 | 0.0010 | 1.0 | - |
| 3.6827 | 5200 | 0.0003 | 0.0010 | 1.0 | - |
| 3.8244 | 5400 | 0.0012 | 0.0013 | 1.0 | - |
| 3.9660 | 5600 | 0.0004 | 0.0011 | 1.0 | - |
| 4.1076 | 5800 | 0.0001 | 0.0029 | 1.0 | - |
| 4.2493 | 6000 | 0.0001 | 0.0016 | 1.0 | - |
| 4.3909 | 6200 | 0.0002 | 0.0042 | 1.0 | - |
| 4.5326 | 6400 | 0.0001 | 0.0033 | 1.0 | - |
| 4.6742 | 6600 | 0.0003 | 0.0015 | 1.0 | - |
| 4.8159 | 6800 | 0.0001 | 0.0009 | 1.0 | - |
| **4.9575** | **7000** | **0.0001** | **0.0002** | **1.0** | **-** |
| 5.0992 | 7200 | 1.6191 | 2.4829 | 0.6893 | - |
| 5.2408 | 7400 | 2.4855 | 2.4829 | 0.5921 | - |
| 5.3824 | 7600 | 2.484 | 2.4829 | 0.6308 | - |
| 5.5241 | 7800 | 2.4877 | 2.4829 | 0.5005 | - |
| 5.6657 | 8000 | 2.4875 | 2.4829 | 0.5137 | - |
| 5.8074 | 8200 | 2.479 | 2.4829 | 0.5024 | - |
| 5.9490 | 8400 | 2.5288 | 2.4165 | 0.6789 | - |
| 6.0907 | 8600 | 2.5154 | 2.4829 | 0.4476 | - |
| 6.2323 | 8800 | 2.4743 | 2.4829 | 0.4920 | - |
| 6.3739 | 9000 | 2.476 | 2.4829 | 0.4589 | - |
| 6.5156 | 9200 | 2.4589 | 2.4829 | 0.4259 | - |
| 6.6572 | 9400 | 2.4584 | 2.4829 | 0.4117 | - |
| 6.7989 | 9600 | 2.4546 | 2.4829 | 0.4287 | - |
| 6.9405 | 9800 | 2.4623 | 2.4829 | 0.4410 | - |
| 7.0822 | 10000 | 2.4525 | 2.4829 | 0.4429 | - |
| 7.2238 | 10200 | 2.4396 | 2.4829 | 0.4514 | - |
| 7.3654 | 10400 | 2.422 | 2.4829 | 0.4570 | - |
| 7.5071 | 10600 | 2.4302 | 2.4829 | 0.4287 | - |
| 7.6487 | 10800 | 2.3866 | 2.4829 | 0.4259 | - |
| 7.7904 | 11000 | 2.4037 | 2.4829 | 0.4353 | - |
| 7.9320 | 11200 | 2.3875 | 2.4829 | 0.4315 | - |
| 8.0737 | 11400 | 2.3916 | 2.4829 | 0.4212 | - |
| 8.2153 | 11600 | 2.3829 | 2.4829 | 0.4079 | - |
| 8.3569 | 11800 | 2.3571 | 2.4829 | 0.4306 | - |
| 8.4986 | 12000 | 2.3312 | 2.4829 | 0.4297 | - |
| 8.6402 | 12200 | 2.3204 | 2.4829 | 0.4032 | - |
| 8.7819 | 12400 | 2.3304 | 2.4829 | 0.4268 | - |
| 8.9235 | 12600 | 2.3034 | 2.4829 | 0.4259 | - |
| 9.0652 | 12800 | 2.2823 | 2.4829 | 0.4070 | - |
| 9.2068 | 13000 | 2.2553 | 2.4829 | 0.4278 | - |
| 9.3484 | 13200 | 2.2879 | 2.4829 | 0.4372 | - |
| 9.4901 | 13400 | 2.2382 | 2.4829 | 0.4278 | - |
| 9.6317 | 13600 | 2.2388 | 2.4829 | 0.4466 | - |
| 9.7734 | 13800 | 2.2279 | 2.4829 | 0.4410 | - |
| 9.9150 | 14000 | 2.2446 | 2.4829 | 0.4551 | - |
| 10.0 | 14120 | - | - | 1.0 | 1.0 |
* The bold row denotes the saved checkpoint. With `load_best_model_at_end` enabled, the step-7000 checkpoint (validation loss 0.0002) is the one retained, which matters here because the training loss diverges from epoch 5 onward.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.0
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mradermacher/h2o-danube3-500m-chat-GGUF | mradermacher | 2024-10-21T10:57:09Z | 31 | 0 | transformers | [
"transformers",
"gguf",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"base_model:h2oai/h2o-danube3-500m-chat",
"base_model:quantized:h2oai/h2o-danube3-500m-chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T10:25:54Z | ---
base_model: h2oai/h2o-danube3-500m-chat
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/h2oai/h2o-danube3-500m-chat
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/h2o-danube3-500m-chat-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
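One common route is llama-cpp-python; a hedged sketch using the Q4_K_M file from the table below (the prompt is illustrative):
```python
from llama_cpp import Llama

# Load the Q4_K_M quant listed below; adjust n_ctx as needed.
llm = Llama(model_path="h2o-danube3-500m-chat.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```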
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-chat-GGUF/resolve/main/h2o-danube3-500m-chat.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-chat-GGUF/resolve/main/h2o-danube3-500m-chat.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-chat-GGUF/resolve/main/h2o-danube3-500m-chat.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-chat-GGUF/resolve/main/h2o-danube3-500m-chat.Q3_K_L.gguf) | Q3_K_L | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-chat-GGUF/resolve/main/h2o-danube3-500m-chat.IQ4_XS.gguf) | IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-chat-GGUF/resolve/main/h2o-danube3-500m-chat.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-chat-GGUF/resolve/main/h2o-danube3-500m-chat.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-chat-GGUF/resolve/main/h2o-danube3-500m-chat.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-chat-GGUF/resolve/main/h2o-danube3-500m-chat.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-chat-GGUF/resolve/main/h2o-danube3-500m-chat.Q6_K.gguf) | Q6_K | 0.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-chat-GGUF/resolve/main/h2o-danube3-500m-chat.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-chat-GGUF/resolve/main/h2o-danube3-500m-chat.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sec-classification/secroberta_ft | sec-classification | 2024-10-21T10:50:25Z | 39 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-10-21T10:50:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
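In the absence of card details, a minimal loading sketch using the standard `transformers` auto classes (the task head is unspecified here, so the bare `AutoModel` is used and the input sentence is illustrative):
```python
from transformers import AutoModel, AutoTokenizer

model_id = "sec-classification/secroberta_ft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Example filing sentence.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```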
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/h2o-danube3-500m-base-GGUF | mradermacher | 2024-10-21T10:49:26Z | 122 | 0 | transformers | [
"transformers",
"gguf",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"base_model:h2oai/h2o-danube3-500m-base",
"base_model:quantized:h2oai/h2o-danube3-500m-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-21T10:24:26Z | ---
base_model: h2oai/h2o-danube3-500m-base
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/h2oai/h2o-danube3-500m-base
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
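Individual quant files can also be fetched with `huggingface_hub` before loading them in a GGUF-capable runtime; a sketch using the Q4_K_M entry from the table below:
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/h2o-danube3-500m-base-GGUF",
    filename="h2o-danube3-500m-base.Q4_K_M.gguf",
)
print(path)  # local cache path, usable by llama.cpp-compatible runtimes
```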
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-GGUF/resolve/main/h2o-danube3-500m-base.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-GGUF/resolve/main/h2o-danube3-500m-base.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-GGUF/resolve/main/h2o-danube3-500m-base.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-GGUF/resolve/main/h2o-danube3-500m-base.Q3_K_L.gguf) | Q3_K_L | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-GGUF/resolve/main/h2o-danube3-500m-base.IQ4_XS.gguf) | IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-GGUF/resolve/main/h2o-danube3-500m-base.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-GGUF/resolve/main/h2o-danube3-500m-base.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-GGUF/resolve/main/h2o-danube3-500m-base.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-GGUF/resolve/main/h2o-danube3-500m-base.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-GGUF/resolve/main/h2o-danube3-500m-base.Q6_K.gguf) | Q6_K | 0.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-GGUF/resolve/main/h2o-danube3-500m-base.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-GGUF/resolve/main/h2o-danube3-500m-base.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/h2o-danube3-500m-base-i1-GGUF | mradermacher | 2024-10-21T10:49:26Z | 83 | 0 | transformers | [
"transformers",
"gguf",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"base_model:h2oai/h2o-danube3-500m-base",
"base_model:quantized:h2oai/h2o-danube3-500m-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-10-21T10:43:52Z | ---
base_model: h2oai/h2o-danube3-500m-base
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/h2oai/h2o-danube3-500m-base
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/h2o-danube3-500m-base-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-IQ1_S.gguf) | i1-IQ1_S | 0.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-IQ2_S.gguf) | i1-IQ2_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-IQ2_M.gguf) | i1-IQ2_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-Q2_K.gguf) | i1-Q2_K | 0.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-IQ3_S.gguf) | i1-IQ3_S | 0.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-IQ3_M.gguf) | i1-IQ3_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 0.4 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 0.4 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 0.4 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-Q4_0.gguf) | i1-Q4_0 | 0.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/h2o-danube3-500m-base-i1-GGUF/resolve/main/h2o-danube3-500m-base.i1-Q6_K.gguf) | i1-Q6_K | 0.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
pramudyalyza/bert-indonesian-finetuned-news-v2 | pramudyalyza | 2024-10-21T10:43:15Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa",
"base_model:finetune:ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-21T10:42:44Z | ---
base_model: ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa
library_name: transformers
license: mit
metrics:
- f1
tags:
- generated_from_trainer
model-index:
- name: bert-indonesian-finetuned-news-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-indonesian-finetuned-news-v2
This model is a fine-tuned version of [ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa](https://huggingface.co/ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3352
- F1: 0.8548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored in the sketch after the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 7
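A hedged replication sketch of these settings with `transformers.TrainingArguments` (the output directory is a placeholder; model and datasets are not shown):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters above; the Adam betas/epsilon are the defaults.
args = TrainingArguments(
    output_dir="output",  # placeholder
    learning_rate=1e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=7,
)
```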
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.2467 | 1.0 | 1000 | 0.3352 | 0.8548 |
| 0.3142 | 2.0 | 2000 | 0.3523 | 0.8576 |
| 0.2848 | 3.0 | 3000 | 0.3878 | 0.8584 |
| 0.2481 | 4.0 | 4000 | 0.4034 | 0.8676 |
| 0.2225 | 5.0 | 5000 | 0.4437 | 0.8705 |
| 0.2038 | 6.0 | 6000 | 0.4584 | 0.8706 |
| 0.1927 | 7.0 | 7000 | 0.4541 | 0.8710 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
mervinpraison/Florence-2-FT-DocVQA | mervinpraison | 2024-10-21T10:42:48Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-07-01T17:26:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
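Pending the missing details, a hedged starting point that assumes the standard Florence-2 loading pattern (the repo's tags indicate custom code, hence `trust_remote_code=True`):
```python
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "mervinpraison/Florence-2-FT-DocVQA"

# Florence-2 repositories define custom modeling/processing code on the Hub.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
```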
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Gemma2-Eclipse-9B-i1-GGUF | mradermacher | 2024-10-21T10:41:52Z | 126 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mpasila/Gemma2-Eclipse-9B",
"base_model:quantized:mpasila/Gemma2-Eclipse-9B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-21T10:20:10Z | ---
base_model: mpasila/Gemma2-Eclipse-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mpasila/Gemma2-Eclipse-9B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma2-Eclipse-9B-i1-GGUF/resolve/main/Gemma2-Eclipse-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Ariffiq99/Bert_Stacked_model_40 | Ariffiq99 | 2024-10-21T10:38:20Z | 103 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-10-21T09:36:26Z | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Bert_Stacked_model_40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert_Stacked_model_40
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1520
- F1: 0.5415
## Model description
More information needed
## Intended uses & limitations
More information needed
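Given the multiple-choice pipeline tag, usage presumably follows the standard `AutoModelForMultipleChoice` pattern; a hedged sketch (the prompt and choices are illustrative):
```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_id = "Ariffiq99/Bert_Stacked_model_40"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

prompt = "The street was flooded, so she"
choices = ["took the bus.", "went for a swim in it."]

# Encode (prompt, choice) pairs, then add a batch dimension:
# the model expects inputs of shape (batch, num_choices, seq_len).
enc = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
enc = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**enc).logits
print(choices[logits.argmax(dim=-1).item()])
```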
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.2577 | 1.0 | 631 | 1.2060 | 0.5057 |
| 1.2168 | 2.0 | 1262 | 1.1779 | 0.5234 |
| 1.1862 | 3.0 | 1893 | 1.1617 | 0.5330 |
| 1.1409 | 4.0 | 2524 | 1.1500 | 0.5404 |
| 1.1227 | 5.0 | 3155 | 1.1520 | 0.5415 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf | RichardErkhov | 2024-10-21T10:30:21Z | 113 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T10:05:24Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
LLaMa-3.2-Instruct-JankMixBread-v0.1-3B - GGUF
- Model creator: https://huggingface.co/PJMixers-Dev/
- Original model: https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q2_K.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q2_K.gguf) | Q2_K | 1.27GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q3_K.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q3_K.gguf) | Q3_K | 1.57GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q4_0.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q4_0.gguf) | Q4_0 | 1.79GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q4_K.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q4_K.gguf) | Q4_K | 1.88GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q4_1.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q4_1.gguf) | Q4_1 | 1.95GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q5_0.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q5_0.gguf) | Q5_0 | 2.11GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q5_K.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q5_K.gguf) | Q5_K | 2.16GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q5_1.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q5_1.gguf) | Q5_1 | 2.28GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q6_K.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q6_K.gguf) | Q6_K | 2.46GB |
| [LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q8_0.gguf](https://huggingface.co/RichardErkhov/PJMixers-Dev_-_LLaMa-3.2-Instruct-JankMixBread-v0.1-3B-gguf/blob/main/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
license: llama3.2
library_name: transformers
tags:
- mergekit
- merge
base_model:
- meta-llama/Llama-3.2-3B
- meta-llama/Llama-3.2-3B-Instruct
- PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
model-index:
- name: LLaMa-3.2-Instruct-JankMixBread-v0.1-3B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 50.41
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 22.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 10.73
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.36
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.68
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 23.15
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMixBread-v0.1-3B
name: Open LLM Leaderboard
---
# LLaMa-3.2-Instruct-JankMixBread-v0.1-3B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the breadcrumbs_ties merge method, with [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) as the base.
### Models Merged
The following models were included in the merge:
* [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
* [PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: breadcrumbs_ties
base_model: meta-llama/Llama-3.2-3B
tokenizer_source: PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
dtype: bfloat16
parameters:
normalize: true
models:
- model: meta-llama/Llama-3.2-3B-Instruct
parameters:
weight: 1
density: 0.9
gamma: 0.01
- model: PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
parameters:
weight: 1
density: 0.9
gamma: 0.01
```
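For reference, a merge like this can also be reproduced programmatically. The sketch below follows mergekit's documented Python entry point (`run_merge`); exact option and module names may differ between mergekit versions, so treat it as an assumption-laden outline rather than a verified script.

```python
# Sketch: reproduce the merge from the YAML above via mergekit's Python API.
# Module paths and MergeOptions fields follow mergekit's README and may vary by version.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("breadcrumbs_ties.yaml") as f:  # the YAML configuration shown above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./LLaMa-3.2-Instruct-JankMixBread-v0.1-3B",  # output directory
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```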
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PJMixers-Dev__LLaMa-3.2-Instruct-JankMixBread-v0.1-3B)
| Metric |Value|
|-------------------|----:|
|Avg. |19.35|
|IFEval (0-Shot) |50.41|
|BBH (3-Shot) |22.76|
|MATH Lvl 5 (4-Shot)|10.73|
|GPQA (0-shot) | 4.36|
|MuSR (0-shot) | 4.68|
|MMLU-PRO (5-shot) |23.15|
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
Niha14/speecht5_finetuned_techincal_data | Niha14 | 2024-10-21T10:29:07Z | 78 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-10-21T05:58:57Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_techincal_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_techincal_data
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5278
## Model description
More information needed
## Intended uses & limitations
More information needed
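Since the usage section is empty, here is a minimal inference sketch for a SpeechT5 fine-tune like this one. It assumes the standard `microsoft/speecht5_tts` pipeline pieces (HiFi-GAN vocoder, x-vector speaker embeddings) still apply to this checkpoint; the input sentence is just an example.

```python
# Sketch: text-to-speech with this fine-tune, assuming the usual SpeechT5 setup.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "Niha14/speecht5_finetuned_techincal_data"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any 512-dim x-vector works as the speaker embedding; this dataset is the usual demo source.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="The API gateway routes requests to backend services.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("out.wav", speech.numpy(), samplerate=16000)
```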
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.5557 | 5.7143 | 100 | 0.5029 |
| 0.4863 | 11.4286 | 200 | 0.5253 |
| 0.4633 | 17.1429 | 300 | 0.5023 |
| 0.4395 | 22.8571 | 400 | 0.5260 |
| 0.4272 | 28.5714 | 500 | 0.5278 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf | RichardErkhov | 2024-10-21T10:27:07Z | 18 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T10:00:11Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only - GGUF
- Model creator: https://huggingface.co/BunnyBosz/
- Original model: https://huggingface.co/BunnyBosz/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q2_K.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.IQ3_S.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.IQ3_M.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q3_K.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q4_0.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q4_K.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q4_1.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q5_0.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q5_K.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q5_1.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q6_K.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q8_0.gguf](https://huggingface.co/RichardErkhov/BunnyBosz_-_llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only-gguf/blob/main/llama-3.2-3b-fine-tuned-model-Mut-effect-pred-v3-abstracts-only.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
ofirab/visfocus-base-docvqa | ofirab | 2024-10-21T10:26:36Z | 5 | 0 | null | [
"safetensors",
"visfocus",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | 2024-10-13T07:08:41Z | ---
license: apache-2.0
---
|
malduwais/xlm-roberta-base-finetuned-ANAD-mlm-0.15-base-19OCT | malduwais | 2024-10-21T10:20:52Z | 5 | 0 | null | [
"safetensors",
"xlm-roberta",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2024-10-21T10:20:24Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-ANAD-mlm-0.15-base-19OCT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-ANAD-mlm-0.15-base-19OCT
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5208
## Model description
More information needed
## Intended uses & limitations
More information needed
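As the card leaves usage unspecified, a plausible minimal sketch for a masked-language-model fine-tune like this is the `fill-mask` pipeline (the mask token for XLM-RoBERTa is `<mask>`; the example sentence is arbitrary):

```python
# Sketch: masked-token prediction with the fine-tuned XLM-RoBERTa MLM.
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="malduwais/xlm-roberta-base-finetuned-ANAD-mlm-0.15-base-19OCT",
)
for pred in fill("Machine <mask> models require large datasets."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```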
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 2.0052 | 0.9999 | 1063 | 1.8141 |
| 1.9038 | 1.9999 | 2126 | 1.7297 |
| 1.8271 | 2.9998 | 3189 | 1.6728 |
| 1.7734 | 3.9998 | 4252 | 1.6329 |
| 1.728 | 4.9997 | 5315 | 1.6055 |
| 1.6905 | 5.9996 | 6378 | 1.5794 |
| 1.6624 | 6.9996 | 7441 | 1.5558 |
| 1.6257 | 7.9995 | 8504 | 1.5368 |
| 1.6127 | 8.9995 | 9567 | 1.5279 |
| 1.593 | 9.9994 | 10630 | 1.5208 |
### Framework versions
- Transformers 4.43.4
- Pytorch 2.1.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
paraschopra/llama-31-8b-instruct-regenerated-100-cot-long-system-prompt | paraschopra | 2024-10-21T10:19:32Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-21T10:16:23Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** paraschopra
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
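A hedged loading sketch for an Unsloth fine-tune like this one; argument names follow Unsloth's README conventions, and sequence length and precision are placeholders to adjust for your hardware.

```python
# Sketch: load the fine-tune for inference with Unsloth, per its README conventions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="paraschopra/llama-31-8b-instruct-regenerated-100-cot-long-system-prompt",
    max_seq_length=4096,
    load_in_4bit=True,  # 4-bit to fit consumer GPUs; optional
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster inference path

input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Think step by step: what is 17 * 23?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(input_ids, max_new_tokens=256)[0]))
```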
|
interview-eval/zephyr-7b-math-case-8-fixed | interview-eval | 2024-10-21T10:16:27Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:EunsuKim/GSM8K",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-21T10:00:32Z | ---
library_name: transformers
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- EunsuKim/GSM8K
model-index:
- name: zephyr-7b-math-case-8-fixed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-math-case-8-fixed
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the EunsuKim/GSM8K dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0279
## Model description
More information needed
## Intended uses & limitations
More information needed
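In lieu of the missing usage section, a generation sketch for this GSM8K fine-tune, assuming the tokenizer ships Zephyr's chat template (the sample question is from GSM8K's public examples):

```python
# Sketch: query the GSM8K fine-tune via the tokenizer's chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "interview-eval/zephyr-7b-math-case-8-fixed"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Natalia sold clips to 48 of her friends in April, "
             "and then she sold half as many clips in May. How many clips did she sell altogether?"}]
input_ids = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tok.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```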
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0201 | 1.0 | 4 | 0.8702 |
| 0.9576 | 2.0 | 8 | 0.6513 |
| 0.7027 | 3.0 | 12 | 0.4668 |
| 0.4836 | 4.0 | 16 | 0.2752 |
| 0.2544 | 5.0 | 20 | 0.1434 |
| 0.2544 | 6.0 | 24 | 0.0726 |
| 0.109 | 7.0 | 28 | 0.0441 |
| 0.0544 | 8.0 | 32 | 0.0341 |
| 0.0364 | 9.0 | 36 | 0.0290 |
| 0.0289 | 10.0 | 40 | 0.0279 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf | RichardErkhov | 2024-10-21T10:03:18Z | 105 | 0 | null | [
"gguf",
"arxiv:2401.01335",
"arxiv:2305.16958",
"arxiv:2310.05914",
"arxiv:2306.17492",
"arxiv:2103.06583",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T09:33:07Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Memphis-CoT-3B - GGUF
- Model creator: https://huggingface.co/euclaise/
- Original model: https://huggingface.co/euclaise/Memphis-CoT-3B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Memphis-CoT-3B.Q2_K.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q2_K.gguf) | Q2_K | 1.01GB |
| [Memphis-CoT-3B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.IQ3_XS.gguf) | IQ3_XS | 1.11GB |
| [Memphis-CoT-3B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.IQ3_S.gguf) | IQ3_S | 1.17GB |
| [Memphis-CoT-3B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q3_K_S.gguf) | Q3_K_S | 1.17GB |
| [Memphis-CoT-3B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.IQ3_M.gguf) | IQ3_M | 1.23GB |
| [Memphis-CoT-3B.Q3_K.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q3_K.gguf) | Q3_K | 1.3GB |
| [Memphis-CoT-3B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q3_K_M.gguf) | Q3_K_M | 1.3GB |
| [Memphis-CoT-3B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q3_K_L.gguf) | Q3_K_L | 1.4GB |
| [Memphis-CoT-3B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.IQ4_XS.gguf) | IQ4_XS | 1.43GB |
| [Memphis-CoT-3B.Q4_0.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q4_0.gguf) | Q4_0 | 1.5GB |
| [Memphis-CoT-3B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.IQ4_NL.gguf) | IQ4_NL | 1.51GB |
| [Memphis-CoT-3B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q4_K_S.gguf) | Q4_K_S | 1.51GB |
| [Memphis-CoT-3B.Q4_K.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q4_K.gguf) | Q4_K | 1.59GB |
| [Memphis-CoT-3B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q4_K_M.gguf) | Q4_K_M | 1.59GB |
| [Memphis-CoT-3B.Q4_1.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q4_1.gguf) | Q4_1 | 1.65GB |
| [Memphis-CoT-3B.Q5_0.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q5_0.gguf) | Q5_0 | 1.81GB |
| [Memphis-CoT-3B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q5_K_S.gguf) | Q5_K_S | 1.81GB |
| [Memphis-CoT-3B.Q5_K.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q5_K.gguf) | Q5_K | 1.86GB |
| [Memphis-CoT-3B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q5_K_M.gguf) | Q5_K_M | 1.86GB |
| [Memphis-CoT-3B.Q5_1.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q5_1.gguf) | Q5_1 | 1.96GB |
| [Memphis-CoT-3B.Q6_K.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q6_K.gguf) | Q6_K | 2.14GB |
| [Memphis-CoT-3B.Q8_0.gguf](https://huggingface.co/RichardErkhov/euclaise_-_Memphis-CoT-3B-gguf/blob/main/Memphis-CoT-3B.Q8_0.gguf) | Q8_0 | 2.77GB |
Original model description:
---
license: cc-by-sa-3.0
library_name: transformers
tags:
- supertrainer2000
- human-data
datasets:
- euclaise/TinyCoT
- euclaise/reddit-instruct
- sablo/oasst2_curated
- euclaise/SciCoT
metrics:
- accuracy
base_model: stabilityai/stablelm-3b-4e1t
---
*Now with a training bug fixed!*

Memphis-CoT is a finetune of [StableLM 3b 4e1t](https://huggingface.co/stabilityai/stablelm-3b-4e1t) on [TinyCoT](https://huggingface.co/datasets/euclaise/TinyCoT), [SciCoT](https://huggingface.co/datasets/euclaise/SciCoT), along with [reddit-instruct](https://huggingface.co/datasets/euclaise/reddit-instruct) (subset to 5000 examples, excluding posts with brackets in the title) and a [curated](https://huggingface.co/datasets/sablo/oasst2_curated) subset of [oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2).
**Memphis was trained *only* on human data! No GPT generations here.**
Finetuning was performed using my [supertrainer2000](https://github.com/euclaise/supertrainer2000) framework, with my Adalite optimizer.
## Training Procedure
I finetuned the model using an iterative rationale-bootstrapping procedure inspired by [STaR](https://research.google/pubs/star-self-taught-reasoner-bootstrapping-reasoning-with-reasoning/) and [SPIN](https://arxiv.org/abs/2401.01335).
First, I finetuned the model on all the datasets using a [MixCE](https://arxiv.org/abs/2305.16958) loss and [NEFTune](https://arxiv.org/abs/2310.05914), for 2 epochs.
I then performed the following steps 3 times:
1. Generate responses for each question in TinyCoT using the current model, check each response for correctness, and create a dataset of (correct, incorrect) pairs. Extra values are discarded, such that each correct and incorrect response is unique.
2. Finetune the model for 1 epoch using a ranking loss over length-normalized log-probabilities of each sequence, similar to [Preference Ranking Optimization](https://arxiv.org/abs/2306.17492), comparing the correct vs. incorrect generated responses. Additionally, a standard CE loss over the chosen completion is included.
This should be more efficient than either STaR or SPIN, as it uses a ranking loss rather than rejection sampling (unlike STaR), and verifies correctness instead of assuming all model responses are incorrect (unlike SPIN).
To prevent excessive drift, I kept the model weights as a moving average: After each generate+train cycle, I interpolated between the previous model weights and the updated weights using spherical linear interpolation (SLERP), with an interpolation factor of 0.99.
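For concreteness, a minimal sketch of that SLERP step between two parameter tensors is shown below. The card does not say which endpoint the 0.99 factor favors (previous vs. updated weights), so the direction here is an assumption; treat it as illustrative.

```python
# Sketch: spherical linear interpolation between two weight tensors, as in the
# moving-average step described above. Which endpoint t = 0.99 favors is an
# assumption; the card only states the interpolation factor.
import torch

def slerp(prev: torch.Tensor, new: torch.Tensor, t: float = 0.99) -> torch.Tensor:
    p, n = prev.flatten(), new.flatten()
    cos = torch.clamp(torch.dot(p, n) / (p.norm() * n.norm()), -1.0, 1.0)
    omega = torch.arccos(cos)          # angle between the two weight vectors
    if omega.abs() < 1e-6:             # nearly parallel: fall back to linear interpolation
        return (1 - t) * prev + t * new
    so = torch.sin(omega)
    return (torch.sin((1 - t) * omega) / so) * prev + (torch.sin(t * omega) / so) * new
```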
## Prompt formats
The format for reddit-instruct and oasst2 was:
```
### User:
[insert instruction here]
### Assistant:
[insert response here]
### User:
...
```
The format for TinyCoT was:
```
### User:
[insert instruction here]
### Rationale:
[insert reasoning here]
### Answer:
[insert direct answer here]
```
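Putting the TinyCoT format into practice, here is a hedged generation sketch. The temperature and `top_p` values match the generation hyperparameters listed later in this card; the question and stop behavior are illustrative, not the author's.

```python
# Sketch: prompt Memphis-CoT with the TinyCoT format shown above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "euclaise/Memphis-CoT-3B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "### User:\n"
    "If a train travels 60 miles in 1.5 hours, what is its average speed?\n"
    "### Rationale:\n"
)
ids = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**ids, max_new_tokens=256, do_sample=True,
                     temperature=0.5, top_p=0.8)  # the card's generation settings
print(tok.decode(out[0], skip_special_tokens=True))
```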
## Benchmarks
| Model | Size | Data | Method | GSM8K (5-shot) | AGIEval (English/Nous subset, acc_norm) | BIG Bench Hard (CoT, few-shot*) |
|:-----------------------------------------------------------------------|--------|:--------------------|---------------|:---------------|:----------------------------------------|:------------------------------ |
| [StableLM 3B Base](https://hf.co/stabilityai/stablelm-3b-4e1t) | 3B | Base | Base | 2.05% | 25.14% | 36.75% |
| [StableHermes 3B](https://hf.co/cxllin/StableHermes-3b) | 3B | GPT | SFT | 3.64% | 24.31% | **37.28%** |
| [MPT 7B Instruct](https://hf.co/mosaicml/mpt-7b-instruct) | **7B** | **Human**+Anthropic | SFT | 2.05% | 24.12% | 11.01% |
| [OpenLLaMA 7B v2 open-instruct](http://hf.co/VMware/open-llama-7b-v2-open-instruct) | **7B** | **Human** (nearly: ecqa is an exception) | SFT | 8.64% | 23.21% | 29.84% |
| [StableLM Zephyr 3B](https://hf.co/stabilityai/stablelm-zephyr-3b) | 3B | GPT | DPO | possibly contaminated (45.72%) | **33.31%** | 0.91% |
| [LIMA LLaMA 2 7B](https://huggingface.co/heegyu/LIMA2-7b-hf) | **7B** | **Human** | SFT | 4.55% | 24.55% | 36.29% |
| [**Memphis-CoT 3B**](https://hf.co/euclaise/Memphis-CoT-3B) | 3B | **Human** | Self-teaching | **18.8%** | *27.22%* | *36.92%* |
*5-shot, as performed automatically by LM Evaluation Harness bbh_cot_fewshot even with num_fewshot=0
Memphis outperforms other primarily-human-data models that are over twice its size, along with SFT models of its size, and trades with the Zephyr DPO model. That said, Zephyr uses synthetic data, and *much* more of it.
Note that BBH results have wide SEs, sometimes even exceeding 16%.
It is unclear why Zephyr performs so poorly on BBH. Perhaps it is overfit, or maybe there was an issue with vllm.
Notes:
- Evaluations were performed using the `agieval` branch of [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) (commit `0bef5c9c273b1c2f68e6018d4bb9c32b9aaff298`), using the `vllm` model.
- I tried to find human-data-trained StableLM models, but couldn't find any. I did find a few OpenLLaMA models, but they wouldn't load with LM Eval Harness and vllm. (I believe this can be fixed by changing the xformers backend, but I'm too lazy for that)
- OpenLLaMA 7B v2 open-instruct is a particularly relevant comparison, as it was trained on a *very* similar dataset.
## Hyperparameters
For the initial supervised finetuning step:
- Adalite optimizer, default hyperparameters of supertrainer2000 unless otherwise specified
- Lambda (Adalite's analogue to weight decay, see [here](https://arxiv.org/abs/2103.06583) for details) of 0.01
- LR of 1e-5
- MixCE ratio of 0.75
- Sequence length of 4096
- Cosine decay with a 20% warmup
- Frozen embeddings
- No training on inputs
- Accumulated batch size of 128
- NEFTune with an alpha of 10
For the generations:
- Generated using the current git version of `vllm`
- N=8
- Temperature of 0.5
- `top_p` of 0.8
- Maximum of 512 generated tokens, discarding responses that do not have a valid rationale and answer
For the rank finetuning:
- Adalite optimizer, default hyperparameters of supertrainer2000 unless otherwise specified
- Lambda of 0.01
- LR of 5e-7
- Rank loss weight of 0.25
- Sequence length of 1024
- Cosine schedule with 10% warmup
- Frozen embeddings
- No training on inputs
- Accumulated batch size of 128
- NEFTune with an alpha of 10
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
mradermacher/Lama3.1-8B-Saver-16B-GGUF | mradermacher | 2024-10-21T10:02:06Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:RsGoksel/Lama3.1-8B-Saver-16B",
"base_model:quantized:RsGoksel/Lama3.1-8B-Saver-16B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T08:31:07Z | ---
base_model: RsGoksel/Lama3.1-8B-Saver-16B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/RsGoksel/Lama3.1-8B-Saver-16B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
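A minimal Python sketch for fetching and loading one of the quants listed below; the chosen filename and context size are examples, not recommendations beyond the table's own notes.

```python
# Sketch: fetch one quant from the table below and load it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Lama3.1-8B-Saver-16B-GGUF",
    filename="Lama3.1-8B-Saver-16B.Q4_K_M.gguf",  # the "fast, recommended" pick below
)
llm = Llama(model_path=path, n_ctx=8192)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}], max_tokens=64
)
print(out["choices"][0]["message"]["content"])
```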
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lama3.1-8B-Saver-16B-GGUF/resolve/main/Lama3.1-8B-Saver-16B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Lama3.1-8B-Saver-16B-GGUF/resolve/main/Lama3.1-8B-Saver-16B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Lama3.1-8B-Saver-16B-GGUF/resolve/main/Lama3.1-8B-Saver-16B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lama3.1-8B-Saver-16B-GGUF/resolve/main/Lama3.1-8B-Saver-16B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lama3.1-8B-Saver-16B-GGUF/resolve/main/Lama3.1-8B-Saver-16B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Lama3.1-8B-Saver-16B-GGUF/resolve/main/Lama3.1-8B-Saver-16B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lama3.1-8B-Saver-16B-GGUF/resolve/main/Lama3.1-8B-Saver-16B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lama3.1-8B-Saver-16B-GGUF/resolve/main/Lama3.1-8B-Saver-16B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Lama3.1-8B-Saver-16B-GGUF/resolve/main/Lama3.1-8B-Saver-16B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Lama3.1-8B-Saver-16B-GGUF/resolve/main/Lama3.1-8B-Saver-16B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Lama3.1-8B-Saver-16B-GGUF/resolve/main/Lama3.1-8B-Saver-16B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Huishan/SFVE-large | Huishan | 2024-10-21T09:49:13Z | 65 | 1 | null | [
"safetensors",
"roberta",
"region:us"
] | null | 2024-10-17T08:36:44Z | This is the official repository (still under construction) for paper "Towards Flexible Evaluation for Generative Visual Question Answering", an oral in ACM Multimedia 2024.
Please refer to our github repository (https://github.com/jihuishan/flexible_evaluation_for_vqa_mm24/tree/main) for more details.
Welcome any questions to [email protected]. |
Rafay17/Llama3.2_1b_customModle2 | Rafay17 | 2024-10-21T09:45:57Z | 20 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-13T15:22:26Z | ---
base_model: unsloth/llama-3.2-1b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** Rafay17
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf | RichardErkhov | 2024-10-21T09:44:08Z | 46 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T09:16:17Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3.2-3b-it-Ecommerce-ChatBot - GGUF
- Model creator: https://huggingface.co/tyseng/
- Original model: https://huggingface.co/tyseng/llama-3.2-3b-it-Ecommerce-ChatBot/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q2_K.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_0.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_1.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_0.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_1.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q6_K.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q8_0.gguf](https://huggingface.co/RichardErkhov/tyseng_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf | RichardErkhov | 2024-10-21T09:41:37Z | 121 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T09:11:44Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mol-Llama-3.2-3B-Instruct-Uncensored-v2 - GGUF
- Model creator: https://huggingface.co/wesley7137/
- Original model: https://huggingface.co/wesley7137/Mol-Llama-3.2-3B-Instruct-Uncensored-v2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q2_K.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q2_K.gguf) | Q2_K | 1.27GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q3_K.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q3_K.gguf) | Q3_K | 1.57GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q4_0.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q4_0.gguf) | Q4_0 | 1.79GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q4_K.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q4_K.gguf) | Q4_K | 1.88GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q4_1.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q4_1.gguf) | Q4_1 | 1.95GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q5_0.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q5_0.gguf) | Q5_0 | 2.11GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q5_K.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q5_K.gguf) | Q5_K | 2.16GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q5_1.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q5_1.gguf) | Q5_1 | 2.28GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q6_K.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q6_K.gguf) | Q6_K | 2.46GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q8_0.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-v2-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored-v2.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
interview-eval/zephyr-7b-stem-case-2 | interview-eval | 2024-10-21T09:41:23Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:scottsuk0306/DepthQA",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-21T09:32:00Z | ---
library_name: transformers
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- scottsuk0306/DepthQA
model-index:
- name: zephyr-7b-stem-case-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-stem-case-2
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the scottsuk0306/DepthQA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2113
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
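A rough reconstruction of these settings as `transformers.TrainingArguments`; the output directory is an assumption, and the 4-GPU distributed setup (total batch size 32) comes from the launcher rather than from these arguments.

```python
# Sketch: the listed hyperparameters mapped onto TrainingArguments.
# output_dir is assumed; multi-GPU launch (4 devices) is handled by the
# launcher (e.g. accelerate/torchrun), not by these arguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="zephyr-7b-stem-case-2",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=10,
)
```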
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0974 | 1.0 | 1 | 1.1027 |
| 1.0974 | 2.0 | 2 | 1.2132 |
| 1.0974 | 3.0 | 3 | 0.9654 |
| 1.0974 | 4.0 | 4 | 0.6956 |
| 0.9948 | 5.0 | 5 | 0.5648 |
| 0.9948 | 6.0 | 6 | 0.3997 |
| 0.9948 | 7.0 | 7 | 0.3115 |
| 0.9948 | 8.0 | 8 | 0.2529 |
| 0.9948 | 9.0 | 9 | 0.2212 |
| 0.3483 | 10.0 | 10 | 0.2113 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Xerror/XTTS-v2_PeterJarvis | Xerror | 2024-10-21T09:36:48Z | 5 | 0 | coqui | [
"coqui",
"text-to-speech",
"license:other",
"region:us"
] | text-to-speech | 2024-10-14T09:56:24Z | ---
license: other
license_name: coqui-public-model-license
license_link: https://coqui.ai/cpml
library_name: coqui
pipeline_tag: text-to-speech
widget:
- text: "Once when I was six years old I saw a magnificent picture"
---
# ⓍTTS_v2 - Peter Drury Fine-Tuned Model
This repository hosts a fine-tuned version of the ⓍTTS model, utilizing 2.3 minutes of unique voice lines from Peter Drury. The voice lines were sourced from his podcast appearance with JOE on YouTube, which can be found here:
[Peter Drury RANKS His Best Commentary Moments & Reveals Commentary Secrets! MESSI WIN WORLD CUP!](https://www.youtube.com/watch?v=ibT6PINpyaw&t)

Listen to a sample of the ⓍTTS_v2 - Peter Drury Fine-Tuned Model:
<audio controls>
<source src="https://huggingface.co/kodoqmc/XTTS-v2_PeterDrury/resolve/main/fromtts.wav" type="audio/wav">
Your browser does not support the audio element.
</audio>
Here's a Peter Drury mp3 voice line clip from the training data:
<audio controls>
<source src="https://huggingface.co/kodoqmc/XTTS-v2_PeterDrury/resolve/main/reference.wav" type="audio/wav">
Your browser does not support the audio element.
</audio>
## Features
- 🎙️ **Voice Cloning**: Realistic voice cloning with just a short audio clip.
- 🌍 **Multi-Lingual Support**: Generates speech in 17 different languages while maintaining Peter Drury's voice.
- 😃 **Emotion & Style Transfer**: Captures the emotional tone and style of the original voice.
- 🔄 **Cross-Language Cloning**: Maintains the unique voice characteristics across different languages.
- 🎧 **High-Quality Audio**: Outputs at a 24kHz sampling rate for clear and high-fidelity audio.
## Supported Languages
The model supports the following 17 languages: English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt), Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu), Korean (ko), and Hindi (hi).
## Usage in Roll Cage
🤖💬 Boost your AI experience with this Ollama add-on! Enjoy real-time audio 🎙️ and text 🔍 chats, LaTeX rendering 📜, agent automations ⚙️, workflows 🔄, text-to-image 📝➡️🖼️, image-to-text 🖼️➡️🔤, image-to-video 🖼️➡️🎥 transformations. Fine-tune text 📝, voice 🗣️, and image 🖼️ gens. Includes Windows macro controls 🖥️ and DuckDuckGo search.
[ollama_agent_roll_cage (OARC)](https://github.com/Leoleojames1/ollama_agent_roll_cage) is a completely local Python & CMD toolset add-on for the Ollama command line interface. The OARC toolset automates the creation of agents, giving the user more control over the likely output. It provides SYSTEM prompt templates for each ./Modelfile, allowing users to design and deploy custom agents quickly. Users can select which local model file is used in agent construction with the desired system prompt.
## CoquiTTS and Resources
- 🐸💬 **CoquiTTS**: [Coqui TTS on GitHub](https://github.com/coqui-ai/TTS)
- 📚 **Documentation**: [ReadTheDocs](https://tts.readthedocs.io/en/latest/)
- 👩💻 **Questions**: [GitHub Discussions](https://github.com/coqui-ai/TTS/discussions)
- 🗯 **Community**: [Discord](https://discord.gg/5eXr5seRrv)
## License
This model is licensed under the [Coqui Public Model License](https://coqui.ai/cpml). Read more about the origin story of CPML [here](https://coqui.ai/blog/tts/cpml).
## Contact
Join our 🐸Community on [Discord](https://discord.gg/fBC58unbKE) and follow us on [Twitter](https://twitter.com/coqui_ai). For inquiries, email us at [email protected].
Using 🐸TTS API:
```python
from TTS.api import TTS
# Load the fine-tuned model from a local directory (example Windows paths).
tts = TTS(model_path="D:/AI/ollama_agent_roll_cage/AgentFiles/Ignored_TTS/XTTS-v2_PeterDrury/",
          config_path="D:/AI/ollama_agent_roll_cage/AgentFiles/Ignored_TTS/XTTS-v2_PeterDrury/config.json",
          progress_bar=False, gpu=True)
# generate speech by cloning a voice using default settings
tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
file_path="output.wav",
speaker_wav="/path/to/target/speaker.wav",
language="en")
```
Using 🐸TTS Command line:
```console
tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \
--text "Bugün okula gitmek istemiyorum." \
--speaker_wav /path/to/target/speaker.wav \
--language_idx tr \
--use_cuda true
```
Using the model directly:
```python
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts
config = XttsConfig()
config.load_json("/path/to/xtts/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", eval=True)
model.cuda()
outputs = model.synthesize(
"It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
config,
speaker_wav="/data/TTS-public/_refclips/3.wav",
gpt_cond_len=3,
language="en",
)
```
|
Ariffiq99/Bert_Stacked_model_20 | Ariffiq99 | 2024-10-21T09:36:25Z | 102 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-10-21T09:02:05Z | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Bert_Stacked_model_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert_Stacked_model_20
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2506
- F1: 0.4885
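Since the card does not include a usage snippet, here is an illustrative sketch of querying the checkpoint as a multiple-choice model; the prompt and answer choices are invented examples.

```python
# Minimal sketch: score two answer choices with this multiple-choice head.
# The prompt and choices are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

repo = "Ariffiq99/Bert_Stacked_model_20"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

prompt = "The man poured water into the glass because"
choices = ["he was thirsty.", "the glass was singing."]

enc = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
batch = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**batch).logits
print(choices[logits.argmax(dim=-1).item()])
```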
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 316 | 1.3159 | 0.4399 |
| 1.3299 | 2.0 | 632 | 1.2851 | 0.4765 |
| 1.3299 | 3.0 | 948 | 1.2651 | 0.4816 |
| 1.2625 | 4.0 | 1264 | 1.2576 | 0.4874 |
| 1.2236 | 5.0 | 1580 | 1.2506 | 0.4885 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
MatteoVan/layoutlmv3-test | MatteoVan | 2024-10-21T09:33:03Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-21T00:16:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
interview-eval/zephyr-7b-stem-case-1 | interview-eval | 2024-10-21T09:30:40Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:scottsuk0306/DepthQA",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-21T09:18:30Z | ---
library_name: transformers
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- scottsuk0306/DepthQA
model-index:
- name: zephyr-7b-stem-case-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-stem-case-1
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the scottsuk0306/DepthQA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1144 | 1.0 | 2 | 1.2928 |
| 1.1144 | 2.0 | 4 | 0.8673 |
| 1.0902 | 3.0 | 6 | 0.6944 |
| 1.0902 | 4.0 | 8 | 0.4531 |
| 0.5758 | 5.0 | 10 | 0.3128 |
| 0.5758 | 6.0 | 12 | 0.1866 |
| 0.5758 | 7.0 | 14 | 0.1056 |
| 0.1998 | 8.0 | 16 | 0.0681 |
| 0.1998 | 9.0 | 18 | 0.0533 |
| 0.0645 | 10.0 | 20 | 0.0500 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
phoner45/wangchan-sentiment-thai-text-model | phoner45 | 2024-10-21T09:22:40Z | 47 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"camembert",
"text-classification",
"generated_from_trainer",
"th",
"dataset:Wongnai/wongnai_reviews",
"dataset:pythainlp/wisesight_sentiment",
"base_model:airesearch/wangchanberta-base-att-spm-uncased",
"base_model:finetune:airesearch/wangchanberta-base-att-spm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-10T03:50:00Z | ---
library_name: transformers
base_model: airesearch/wangchanberta-base-att-spm-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wangchan-sentiment-thai-text-model
results: []
datasets:
- Wongnai/wongnai_reviews
- pythainlp/wisesight_sentiment
language:
- th
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wangchan-sentiment-thai-text-model
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5849
- Accuracy: 0.7535
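An illustrative usage sketch via the `pipeline` API; the Thai example sentence is an assumption, not taken from the training data.

```python
# Minimal sketch: run the Thai sentiment classifier via the pipeline API.
# The example sentence is an illustrative assumption.
from transformers import pipeline

clf = pipeline("text-classification",
               model="phoner45/wangchan-sentiment-thai-text-model")
print(clf("อาหารอร่อยมาก บริการดีเยี่ยม"))  # e.g. [{'label': ..., 'score': ...}]
```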
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6445 | 1.0 | 3822 | 0.6295 | 0.7309 |
| 0.5865 | 2.0 | 7644 | 0.5855 | 0.7430 |
| 0.5285 | 3.0 | 11466 | 0.5754 | 0.7455 |
| 0.5127 | 4.0 | 15288 | 0.5816 | 0.7492 |
| 0.4861 | 5.0 | 19110 | 0.5849 | 0.7535 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1 |
RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf | RichardErkhov | 2024-10-21T09:18:11Z | 16 | 0 | null | [
"gguf",
"arxiv:2306.01708",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T08:52:46Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
FineTome-v1.5-Llama3.2-3B-1007 - GGUF
- Model creator: https://huggingface.co/NotASI/
- Original model: https://huggingface.co/NotASI/FineTome-v1.5-Llama3.2-3B-1007/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [FineTome-v1.5-Llama3.2-3B-1007.Q2_K.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q2_K.gguf) | Q2_K | 1.27GB |
| [FineTome-v1.5-Llama3.2-3B-1007.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [FineTome-v1.5-Llama3.2-3B-1007.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [FineTome-v1.5-Llama3.2-3B-1007.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [FineTome-v1.5-Llama3.2-3B-1007.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [FineTome-v1.5-Llama3.2-3B-1007.Q3_K.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q3_K.gguf) | Q3_K | 1.57GB |
| [FineTome-v1.5-Llama3.2-3B-1007.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [FineTome-v1.5-Llama3.2-3B-1007.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [FineTome-v1.5-Llama3.2-3B-1007.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [FineTome-v1.5-Llama3.2-3B-1007.Q4_0.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q4_0.gguf) | Q4_0 | 1.79GB |
| [FineTome-v1.5-Llama3.2-3B-1007.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [FineTome-v1.5-Llama3.2-3B-1007.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [FineTome-v1.5-Llama3.2-3B-1007.Q4_K.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q4_K.gguf) | Q4_K | 1.88GB |
| [FineTome-v1.5-Llama3.2-3B-1007.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [FineTome-v1.5-Llama3.2-3B-1007.Q4_1.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q4_1.gguf) | Q4_1 | 1.95GB |
| [FineTome-v1.5-Llama3.2-3B-1007.Q5_0.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q5_0.gguf) | Q5_0 | 2.11GB |
| [FineTome-v1.5-Llama3.2-3B-1007.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [FineTome-v1.5-Llama3.2-3B-1007.Q5_K.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q5_K.gguf) | Q5_K | 2.16GB |
| [FineTome-v1.5-Llama3.2-3B-1007.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [FineTome-v1.5-Llama3.2-3B-1007.Q5_1.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q5_1.gguf) | Q5_1 | 2.28GB |
| [FineTome-v1.5-Llama3.2-3B-1007.Q6_K.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q6_K.gguf) | Q6_K | 2.46GB |
| [FineTome-v1.5-Llama3.2-3B-1007.Q8_0.gguf](https://huggingface.co/RichardErkhov/NotASI_-_FineTome-v1.5-Llama3.2-3B-1007-gguf/blob/main/FineTome-v1.5-Llama3.2-3B-1007.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
license: llama3.2
library_name: transformers
tags:
- mergekit
- merge
base_model:
- unsloth/Llama-3.2-3B-Instruct
- NotASI/FineTome-Llama3.2-3B-1002
model-index:
- name: FineTome-v1.5-Llama3.2-3B-1007
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 55.08
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-v1.5-Llama3.2-3B-1007
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 19.46
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-v1.5-Llama3.2-3B-1007
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 5.14
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-v1.5-Llama3.2-3B-1007
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.57
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-v1.5-Llama3.2-3B-1007
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.07
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-v1.5-Llama3.2-3B-1007
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 16.09
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=NotASI/FineTome-v1.5-Llama3.2-3B-1007
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [NotASI/FineTome-Llama3.2-3B-1002](https://huggingface.co/NotASI/FineTome-Llama3.2-3B-1002)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NotASI/FineTome-Llama3.2-3B-1002
parameters:
weight: 1
- model: unsloth/Llama-3.2-3B-Instruct
parameters:
weight: 1
merge_method: ties
base_model: unsloth/Llama-3.2-3B-Instruct
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
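The same merge could in principle be reproduced programmatically; below is a hedged sketch of mergekit's Python entry point, assuming the API shape shown in the mergekit README (module paths, `MergeOptions` fields, and the output path are assumptions that may differ across versions).

```python
# Hedged sketch, assuming mergekit's documented Python API (may vary by version).
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("ties-config.yaml", "r", encoding="utf-8") as fp:  # the YAML above, saved locally
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./FineTome-v1.5-Llama3.2-3B-merged",  # assumed output directory
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```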
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NotASI__FineTome-v1.5-Llama3.2-3B-1007)
| Metric |Value|
|-------------------|----:|
|Avg. |16.90|
|IFEval (0-Shot) |55.08|
|BBH (3-Shot) |19.46|
|MATH Lvl 5 (4-Shot)| 5.14|
|GPQA (0-shot) | 1.57|
|MuSR (0-shot) | 4.07|
|MMLU-PRO (5-shot) |16.09|
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
anhdang000/Florence-2-base-ChartGemma | anhdang000 | 2024-10-21T09:08:10Z | 110 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-10-20T04:10:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf | RichardErkhov | 2024-10-21T09:03:37Z | 39 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-21T08:40:17Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
opencsg-starcoder2-3b-v0.1 - GGUF
- Model creator: https://huggingface.co/opencsg/
- Original model: https://huggingface.co/opencsg/opencsg-starcoder2-3b-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [opencsg-starcoder2-3b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q2_K.gguf) | Q2_K | 1.19GB |
| [opencsg-starcoder2-3b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.IQ3_XS.gguf) | IQ3_XS | 1.28GB |
| [opencsg-starcoder2-3b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.IQ3_S.gguf) | IQ3_S | 1.34GB |
| [opencsg-starcoder2-3b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q3_K_S.gguf) | Q3_K_S | 1.33GB |
| [opencsg-starcoder2-3b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.IQ3_M.gguf) | IQ3_M | 1.39GB |
| [opencsg-starcoder2-3b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q3_K.gguf) | Q3_K | 1.52GB |
| [opencsg-starcoder2-3b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q3_K_M.gguf) | Q3_K_M | 1.52GB |
| [opencsg-starcoder2-3b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q3_K_L.gguf) | Q3_K_L | 1.68GB |
| [opencsg-starcoder2-3b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.IQ4_XS.gguf) | IQ4_XS | 1.64GB |
| [opencsg-starcoder2-3b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q4_0.gguf) | Q4_0 | 1.71GB |
| [opencsg-starcoder2-3b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.IQ4_NL.gguf) | IQ4_NL | 1.72GB |
| [opencsg-starcoder2-3b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q4_K_S.gguf) | Q4_K_S | 1.72GB |
| [opencsg-starcoder2-3b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q4_K.gguf) | Q4_K | 1.84GB |
| [opencsg-starcoder2-3b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q4_K_M.gguf) | Q4_K_M | 1.84GB |
| [opencsg-starcoder2-3b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q4_1.gguf) | Q4_1 | 1.88GB |
| [opencsg-starcoder2-3b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q5_0.gguf) | Q5_0 | 2.06GB |
| [opencsg-starcoder2-3b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q5_K_S.gguf) | Q5_K_S | 2.06GB |
| [opencsg-starcoder2-3b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q5_K.gguf) | Q5_K | 2.13GB |
| [opencsg-starcoder2-3b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q5_K_M.gguf) | Q5_K_M | 2.13GB |
| [opencsg-starcoder2-3b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q5_1.gguf) | Q5_1 | 2.24GB |
| [opencsg-starcoder2-3b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q6_K.gguf) | Q6_K | 2.44GB |
| [opencsg-starcoder2-3b-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-starcoder2-3b-v0.1-gguf/blob/main/opencsg-starcoder2-3b-v0.1.Q8_0.gguf) | Q8_0 | 3.15GB |
Original model description:
---
license: bigcode-openrail-m
datasets:
- bigcode/starcoderdata
- bigcode/the-stack-dedup
metrics:
- code_eval
library_name: transformers
pipeline_tag: text-generation
tags:
- code
---
# **Opencsg-starcoder2-3b-v0.1** [[Chinese]](#chinese) [[English]](#english)
<a id="english"></a>
<p align="center">
<img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
</p>
<p align="center"><a href="https://portal.opencsg.com/models">[OpenCSG Community]</a> <a href="https://github.com/opencsgs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[wechat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>
</div>
OpenCSG stands for Converged resources, Software refinement, and Generative LM. The 'C' represents Converged resources, indicating the integration and full utilization of hybrid resources. The 'S' stands for Software refinement, signifying software that is refined by large models. The 'G' represents Generative LM, which denotes widespread, inclusive, and democratized generative large models.
The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use, send feedback, and contribute collaboratively.
## Model Description
The [StarCoder](https://huggingface.co/bigcode/starcoder) models are 15.5B parameter models trained on 80+ programming languages from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack), with opt-out requests excluded.
Based on StarCoder2, opencsg-starcoder2-3b-v0.1 was fine-tuned by the OpenCSG LLM Research Team through a full-parameter fine-tuning method.
<br>
## Model Eval
HumanEval is the most common code generation benchmark for evaluating model performance, especially on the completion of code exercise cases.
Model evaluation is, to some extent, a metaphysics. Different models have different sensitivities to decoding methods, parameters and instructions.
It is impractical for us to manually set specific configurations for each fine-tuned model, because a real LLM should master general capabilities regardless of how users adjust the parameters.
Therefore, OpenCSG racked their brains to provide a relatively fair method to compare the fine-tuned models on the HumanEval benchmark.
To simplify the comparison, we chose the Pass@1 metric for the Python language, but our fine-tuning dataset includes samples in multiple languages.
**For fairness, we evaluated the original and fine-tuned StarCoder models based only on the prompts from the original cases, without including any other instructions.**
**Otherwise, we use the greedy decoding method for each model during evaluation.**
| Model | HumanEval python pass@1 |
| --- |----------------------------------------------------------------------------- |
| starcoder | 35.98%|
| opencsg-starcoder-v0.1| **42.68%** |
| starcoder2-3b | 32.93%|
| opencsg-starcoder2-3b-v0.1| **45.12%** |
| starcoder2-7b | 35.37%|
| opencsg-starcoder2-7b-v0.1| **51.22%** |
| starcoder2-15b | 45.12%|
| opencsg-starcoder2-15b-v0.1| **59.15%** |
**TODO**
- We will provide more benchmark scores on fine-tuned models in the future.
- We will provide different practical problems to evaluate the performance of fine-tuned models in the field of software engineering.
# Model Usage
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "opencsg/opencsg-starcoder2-3b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
input_text = """#Generate one test case for the following code.
def quick_sort(arr):
if len(arr) < 2:
return arr
else:
pivot = arr[0]
less = [i for i in arr[1:] if i <= pivot]
greater = [i for i in arr[1:] if i > pivot]
return quick_sort(less) + [pivot] + quick_sort(greater)
"""
sequences = pipeline(
input_text,
do_sample=False,
top_k=10,
temperature=0.1,
top_p=0.95,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=256,
)
for seq in sequences:
print(seq['generated_text'][len(input_text):])
```
**Generated output**
```python
# Test case
arr = [5, 2, 9, 1, 7]
print(quick_sort(arr))
```
# Training
## Hardware
- **GPUs:** 8 Tesla A800
- **Training time:** 7 hours
## Software
- **Orchestration:** [Deepspeed](https://github.com/OpenCSGs)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 if applicable:** [apex](https://github.com/NVIDIA/apex)
<a id="chinese"></a>
<p>
</p>
# About OpenCSG
<p align="center">
<img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
</p>
<p align="center"><a href="https://opencsg.com/models">[OpenCSG 社区]</a> <a href="https://github.com/opencsgs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[微信]</a> <a href="https://twitter.com/OpenCsg">[推特]</a> </p>
</div>
In OpenCSG, "Open" stands for open source and openness; "C" stands for Converged resources, leveraging the advantages of integrated hybrid heterogeneous resources to cut compute costs and improve efficiency; "S" stands for Software refined, redefining software delivery by driving software development with large models to cut labor costs and improve efficiency; "G" stands for Generative LM: popularized, inclusive, democratized, commercially usable open-source generative large models.
The vision of OpenCSG is to let every industry, every company, and every individual own their own models. We adhere to the principles of openness and open source, open-sourcing OpenCSG's large-model software stack to the community. Everyone is welcome to use it, give feedback, take part in building it, and follow the project.
## Model Description
The [StarCoder](https://huggingface.co/bigcode/starcoder) models are 15.5B-parameter models trained on 80+ programming languages from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack), excluding data whose owners requested removal from training.
opencsg-starcoder2-3b-v0.1 was fine-tuned from StarCoder2 by the OpenCSG LLM Research Team using a full-parameter fine-tuning method.
## Model Evaluation
HumanEval is the most common benchmark for evaluating a model's code-generation performance, especially on the completion of code exercises.
Model evaluation is, to some extent, a metaphysics: different models have different sensitivities to decoding methods, parameters, and instructions.
An excellent large model should have general capabilities whose generation quality does not swing widely with adjustments to decoding parameters.
Therefore, OpenCSG provides a relatively fair method for comparing the fine-tuned models on the HumanEval benchmark.
For convenience, we chose the Python-language Pass@1 metric, noting that our fine-tuning dataset contains multiple programming languages.
**For fairness, we evaluated the original and fine-tuned StarCoder models based only on the prompts of the original problems, without any other instructions.**
**In addition, we used the greedy decoding method for each model during evaluation.**
| Model | HumanEval python pass@1 |
| --- |----------------------------------------------------------------------------- |
| starcoder | 35.98%|
| opencsg-starcoder-v0.1| **42.68%** |
| starcoder2-3b | 32.93%|
| opencsg-starcoder2-3b-v0.1| **45.12%** |
| starcoder2-7b | 35.37%|
| opencsg-starcoder2-7b-v0.1| **51.22%** |
| starcoder2-15b | 45.12%|
| opencsg-starcoder2-15b-v0.1| **59.15%** |
**TODO**
- We will provide more benchmark scores for fine-tuned models in the future.
- We will provide different practical problems to evaluate the performance of fine-tuned models in the software engineering field.
# Model Usage
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "opencsg/opencsg-starcoder2-3b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
input_text = """#Generate one test case for the following code.
def quick_sort(arr):
if len(arr) < 2:
return arr
else:
pivot = arr[0]
less = [i for i in arr[1:] if i <= pivot]
greater = [i for i in arr[1:] if i > pivot]
return quick_sort(less) + [pivot] + quick_sort(greater)
"""
sequences = pipeline(
input_text,
do_sample=False,
top_k=10,
temperature=0.1,
top_p=0.95,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=256,
)
for seq in sequences:
print(seq['generated_text'][len(input_text):])
```
**Generated output**
```python
# Test case
arr = [5, 2, 9, 1, 7]
print(quick_sort(arr))
```
# Training
## Hardware
- **GPUs:** 8 Tesla A800
- **Training time:** 7 hours
## Software
- **Fine-tuning framework:** [Deepspeed](https://github.com/OpenCSGs)
- **Deep learning framework:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16:** [apex](https://github.com/NVIDIA/apex)
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
lmstudio-community/granite-3.0-3b-a800m-instruct-GGUF | lmstudio-community | 2024-10-21T08:58:35Z | 148 | 1 | null | [
"gguf",
"language",
"granite-3.0",
"text-generation",
"base_model:ibm-granite/granite-3.0-3b-a800m-instruct",
"base_model:quantized:ibm-granite/granite-3.0-3b-a800m-instruct",
"license:apache-2.0",
"model-index",
"region:us",
"conversational"
] | text-generation | 2024-10-21T08:27:14Z | ---
base_model: ibm-granite/granite-3.0-3b-a800m-instruct
license: apache-2.0
pipeline_tag: text-generation
tags:
- language
- granite-3.0
quantized_by: bartowski
inference: false
model-index:
- name: granite-3.0-2b-instruct
results:
- task:
type: text-generation
dataset:
name: IFEval
type: instruction-following
metrics:
- type: pass@1
value: 42.49
name: pass@1
- type: pass@1
value: 7.02
name: pass@1
- task:
type: text-generation
dataset:
name: AGI-Eval
type: human-exams
metrics:
- type: pass@1
value: 25.7
name: pass@1
- type: pass@1
value: 50.16
name: pass@1
- type: pass@1
value: 20.51
name: pass@1
- task:
type: text-generation
dataset:
name: OBQA
type: commonsense
metrics:
- type: pass@1
value: 40.8
name: pass@1
- type: pass@1
value: 59.95
name: pass@1
- type: pass@1
value: 71.86
name: pass@1
- type: pass@1
value: 67.01
name: pass@1
- type: pass@1
value: 48.0
name: pass@1
- task:
type: text-generation
dataset:
name: BoolQ
type: reading-comprehension
metrics:
- type: pass@1
value: 78.65
name: pass@1
- type: pass@1
value: 6.71
name: pass@1
- task:
type: text-generation
dataset:
name: ARC-C
type: reasoning
metrics:
- type: pass@1
value: 50.94
name: pass@1
- type: pass@1
value: 26.85
name: pass@1
- type: pass@1
value: 37.7
name: pass@1
- task:
type: text-generation
dataset:
name: HumanEvalSynthesis
type: code
metrics:
- type: pass@1
value: 39.63
name: pass@1
- type: pass@1
value: 40.85
name: pass@1
- type: pass@1
value: 35.98
name: pass@1
- type: pass@1
value: 27.4
name: pass@1
- task:
type: text-generation
dataset:
name: GSM8K
type: math
metrics:
- type: pass@1
value: 47.54
name: pass@1
- type: pass@1
value: 19.86
name: pass@1
- task:
type: text-generation
dataset:
name: PAWS-X (7 langs)
type: multilingual
metrics:
- type: pass@1
value: 50.23
name: pass@1
- type: pass@1
value: 28.87
name: pass@1
---
## 💫 Community Model> granite 3.0 3b a800m instruct by Ibm-Granite
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [ibm-granite](https://huggingface.co/ibm-granite)<br>
**Original model**: [granite-3.0-3b-a800m-instruct](https://huggingface.co/ibm-granite/granite-3.0-3b-a800m-instruct)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3930](https://github.com/ggerganov/llama.cpp/releases/tag/b3930)<br>
## Technical Details
This is an MoE model with 3b parameters and 800m activated parameters, making it blazing fast for inference.
Tuned on permissive open source datasets and internal synthetic datasets.
This model is designed to respond to general instructions and can be used to build AI assistants for multiple domains.
Supported Languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese.
Context length of 4096 tokens.
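As an illustrative sketch, one of the GGUF files in this repo could be loaded with the `llama-cpp-python` bindings, matching the 4096-token context noted above; the quant filename pattern and the prompt are assumptions.

```python
# Sketch: chat with a granite GGUF quant via llama-cpp-python.
# The filename glob is assumed; pick any quant actually present in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="lmstudio-community/granite-3.0-3b-a800m-instruct-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,  # matches the model's stated context length
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a mixture-of-experts model is."}]
)
print(out["choices"][0]["message"]["content"])
```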
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
kiranpantha/w2v-bert-2.0-nepali-unlabeled-3 | kiranpantha | 2024-10-21T08:57:54Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"ne",
"dataset:kiranpantha/OpenSLR54-Balanced-Nepali",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-09-25T19:03:51Z | ---
library_name: transformers
language:
- ne
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
datasets:
- kiranpantha/OpenSLR54-Balanced-Nepali
metrics:
- wer
model-index:
- name: Wave2Vec2-Bert2.0 - Kiran Pantha
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: kiranpantha/OpenSLR54-Balanced-Nepali
type: kiranpantha/OpenSLR54-Balanced-Nepali
args: 'config: ne, split: train,test'
metrics:
- name: Wer
type: wer
value: 0.45372112917023094
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wave2Vec2-Bert2.0 - Kiran Pantha
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the kiranpantha/OpenSLR54-Balanced-Nepali dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5146
- Wer: 0.4537
- Cer: 0.1137
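An illustrative transcription sketch via the `pipeline` API; the audio file path is an assumption.

```python
# Minimal sketch: transcribe a Nepali clip with the ASR pipeline.
# The audio path is an illustrative assumption.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="kiranpantha/w2v-bert-2.0-nepali-unlabeled-3")
print(asr("nepali_sample.wav")["text"])
```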
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
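For reference, here is an illustrative `transformers.TrainingArguments` equivalent of the hyperparameters above. This is a sketch, not the card author's actual code: `output_dir` is an assumption, and the Adam betas/epsilon listed are the library defaults.
```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters; output_dir is assumed.
args = TrainingArguments(
    output_dir="w2v-bert-2.0-nepali",   # assumption, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed precision
)
```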
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 0.3129 | 0.24 | 300 | 0.5021 | 0.4484 | 0.1119 |
| 0.3868 | 0.48 | 600 | 0.5117 | 0.4686 | 0.1193 |
| 0.368 | 0.72 | 900 | 0.5399 | 0.4674 | 0.1291 |
| 0.3462 | 0.96 | 1200 | 0.4893 | 0.4506 | 0.1131 |
| 0.3009 | 1.2 | 1500 | 0.5081 | 0.4505 | 0.1134 |
| 0.2721 | 1.44 | 1800 | 0.5146 | 0.4681 | 0.1159 |
| 0.2499 | 1.68 | 2100 | 0.5128 | 0.4549 | 0.1128 |
| 0.2366 | 1.92 | 2400 | 0.5146 | 0.4537 | 0.1137 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
TPRS-RAG-LLM/tprs-Llama-2-7b-hf-finetuned | TPRS-RAG-LLM | 2024-10-21T08:53:33Z | 102 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-21T08:53:09Z | ---
library_name: transformers
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
model-index:
- name: tprs-Llama-2-7b-hf-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tprs-Llama-2-7b-hf-finetuned
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 12.6444 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.2.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
AppyFizz/cal_photon | AppyFizz | 2024-10-21T08:52:52Z | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-10-21T08:47:50Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### calphoton woman on Stable Diffusion via Dreambooth
#### model by AppyFizz
This is a Stable Diffusion model fine-tuned on the **calphoton woman** concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **calphoton woman**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
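As a minimal sketch (not part of the original card; the prompt wording and dtype are assumptions based on standard DreamBooth usage), inference with `diffusers` might look like:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "AppyFizz/cal_photon", torch_dtype=torch.float16
).to("cuda")

# Use the instance prompt to invoke the learned concept.
image = pipe("a portrait photo of calphoton woman").images[0]
image.save("calphoton.png")
```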
|
bartowski/granite-3.0-2b-instruct-GGUF | bartowski | 2024-10-21T08:51:30Z | 321 | 5 | null | [
"gguf",
"language",
"granite-3.0",
"text-generation",
"base_model:ibm-granite/granite-3.0-2b-instruct",
"base_model:quantized:ibm-granite/granite-3.0-2b-instruct",
"license:apache-2.0",
"model-index",
"region:us",
"conversational"
] | text-generation | 2024-10-21T08:20:56Z | ---
base_model: ibm-granite/granite-3.0-2b-instruct
license: apache-2.0
pipeline_tag: text-generation
tags:
- language
- granite-3.0
quantized_by: bartowski
inference: false
model-index:
- name: granite-3.0-2b-instruct
results:
- task:
type: text-generation
dataset:
name: IFEval
type: instruction-following
metrics:
- type: pass@1
value: 46.07
name: pass@1
- type: pass@1
value: 7.66
name: pass@1
- task:
type: text-generation
dataset:
name: AGI-Eval
type: human-exams
metrics:
- type: pass@1
value: 29.75
name: pass@1
- type: pass@1
value: 56.03
name: pass@1
- type: pass@1
value: 27.92
name: pass@1
- task:
type: text-generation
dataset:
name: OBQA
type: commonsense
metrics:
- type: pass@1
value: 43.2
name: pass@1
- type: pass@1
value: 66.36
name: pass@1
- type: pass@1
value: 76.79
name: pass@1
- type: pass@1
value: 71.9
name: pass@1
- type: pass@1
value: 53.37
name: pass@1
- task:
type: text-generation
dataset:
name: BoolQ
type: reading-comprehension
metrics:
- type: pass@1
value: 84.89
name: pass@1
- type: pass@1
value: 19.73
name: pass@1
- task:
type: text-generation
dataset:
name: ARC-C
type: reasoning
metrics:
- type: pass@1
value: 54.35
name: pass@1
- type: pass@1
value: 28.61
name: pass@1
- type: pass@1
value: 43.74
name: pass@1
- task:
type: text-generation
dataset:
name: HumanEvalSynthesis
type: code
metrics:
- type: pass@1
value: 50.61
name: pass@1
- type: pass@1
value: 45.58
name: pass@1
- type: pass@1
value: 51.83
name: pass@1
- type: pass@1
value: 41.0
name: pass@1
- task:
type: text-generation
dataset:
name: GSM8K
type: math
metrics:
- type: pass@1
value: 59.66
name: pass@1
- type: pass@1
value: 23.66
name: pass@1
- task:
type: text-generation
dataset:
name: PAWS-X (7 langs)
type: multilingual
metrics:
- type: pass@1
value: 61.42
name: pass@1
- type: pass@1
value: 37.13
name: pass@1
---
## Llamacpp imatrix Quantizations of granite-3.0-2b-instruct
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3930">b3930</a> for quantization.
Original model: https://huggingface.co/ibm-granite/granite-3.0-2b-instruct
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|start_of_role|>system<|end_of_role|>{system_prompt}<|end_of_text|>
<|start_of_role|>user<|end_of_role|>{prompt}<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>
```
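As one illustration (a sketch, not from the original card), the template can be applied directly with `llama-cpp-python`; the file name, system prompt, and generation settings below are assumptions:
```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table below has been downloaded locally.
llm = Llama(model_path="granite-3.0-2b-instruct-Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<|start_of_role|>system<|end_of_role|>You are a helpful assistant.<|end_of_text|>\n"
    "<|start_of_role|>user<|end_of_role|>Explain what a GGUF quant is.<|end_of_text|>\n"
    "<|start_of_role|>assistant<|end_of_role|>"
)

out = llm(prompt, max_tokens=256, stop=["<|end_of_text|>"])
print(out["choices"][0]["text"])
```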
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [granite-3.0-2b-instruct-f16.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-f16.gguf) | f16 | 5.27GB | false | Full F16 weights. |
| [granite-3.0-2b-instruct-Q8_0.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q8_0.gguf) | Q8_0 | 2.80GB | false | Extremely high quality, generally unneeded but max available quant. |
| [granite-3.0-2b-instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q6_K_L.gguf) | Q6_K_L | 2.21GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [granite-3.0-2b-instruct-Q6_K.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q6_K.gguf) | Q6_K | 2.16GB | false | Very high quality, near perfect, *recommended*. |
| [granite-3.0-2b-instruct-Q5_K_L.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q5_K_L.gguf) | Q5_K_L | 1.94GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [granite-3.0-2b-instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q5_K_M.gguf) | Q5_K_M | 1.87GB | false | High quality, *recommended*. |
| [granite-3.0-2b-instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q5_K_S.gguf) | Q5_K_S | 1.83GB | false | High quality, *recommended*. |
| [granite-3.0-2b-instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q4_K_L.gguf) | Q4_K_L | 1.68GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [granite-3.0-2b-instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q4_K_M.gguf) | Q4_K_M | 1.60GB | false | Good quality, default size for most use cases, *recommended*. |
| [granite-3.0-2b-instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q4_K_S.gguf) | Q4_K_S | 1.52GB | false | Slightly lower quality with more space savings, *recommended*. |
| [granite-3.0-2b-instruct-Q4_0.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q4_0.gguf) | Q4_0 | 1.52GB | false | Legacy format, generally not worth using over similarly sized formats. |
| [granite-3.0-2b-instruct-Q4_0_8_8.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q4_0_8_8.gguf) | Q4_0_8_8 | 1.51GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). *Don't use on Mac or Windows*. |
| [granite-3.0-2b-instruct-Q4_0_4_8.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q4_0_4_8.gguf) | Q4_0_4_8 | 1.51GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). *Don't use on Mac or Windows*. |
| [granite-3.0-2b-instruct-Q4_0_4_4.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q4_0_4_4.gguf) | Q4_0_4_4 | 1.51GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. *Don't use on Mac or Windows*. |
| [granite-3.0-2b-instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q3_K_XL.gguf) | Q3_K_XL | 1.49GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [granite-3.0-2b-instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-IQ4_XS.gguf) | IQ4_XS | 1.44GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [granite-3.0-2b-instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q3_K_L.gguf) | Q3_K_L | 1.40GB | false | Lower quality but usable, good for low RAM availability. |
| [granite-3.0-2b-instruct-IQ3_M.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-IQ3_M.gguf) | IQ3_M | 1.21GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [granite-3.0-2b-instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q3_K_S.gguf) | Q3_K_S | 1.17GB | false | Low quality, not recommended. |
| [granite-3.0-2b-instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-IQ3_XS.gguf) | IQ3_XS | 1.12GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [granite-3.0-2b-instruct-Q2_K_L.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q2_K_L.gguf) | Q2_K_L | 1.11GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [granite-3.0-2b-instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-IQ3_XXS.gguf) | IQ3_XXS | 1.05GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [granite-3.0-2b-instruct-Q2_K.gguf](https://huggingface.co/bartowski/granite-3.0-2b-instruct-GGUF/blob/main/granite-3.0-2b-instruct-Q2_K.gguf) | Q2_K | 1.01GB | false | Very low quality but surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/granite-3.0-2b-instruct-GGUF --include "granite-3.0-2b-instruct-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/granite-3.0-2b-instruct-GGUF --include "granite-3.0-2b-instruct-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (granite-3.0-2b-instruct-Q8_0) or download them all in place (./).
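Equivalently (a sketch, not part of the original card), a single quant can be fetched from Python with `huggingface_hub`:
```python
from huggingface_hub import hf_hub_download

# Downloads a single quant file into the current directory.
path = hf_hub_download(
    repo_id="bartowski/granite-3.0-2b-instruct-GGUF",
    filename="granite-3.0-2b-instruct-Q4_K_M.gguf",
    local_dir="./",
)
print(path)
```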
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
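Put as a quick sketch (illustrative only, using the 1-2GB headroom rule above):
```python
def max_quant_file_gb(vram_gb: float, ram_gb: float = 0.0, headroom_gb: float = 1.5) -> float:
    """Largest quant file size to target, leaving ~1-2GB for context and overhead."""
    return vram_gb + ram_gb - headroom_gb

# A GPU with 8GB of VRAM alone: aim for quants of about 6.5GB or smaller.
print(max_quant_file_gb(8.0))
```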
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, another backend that also supports AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
peakji/peak-reasoning-7b-awq | peakji | 2024-10-21T08:45:23Z | 6 | 0 | null | [
"safetensors",
"qwen2",
"en",
"zh",
"dataset:peakji/peak-search-content-70k",
"dataset:peakji/peak-anchor-content-35k",
"dataset:SkunkworksAI/reasoning-0.01",
"base_model:peakji/peak-reasoning-7b",
"base_model:quantized:peakji/peak-reasoning-7b",
"license:apache-2.0",
"4-bit",
"awq",
"region:us"
] | null | 2024-09-17T03:48:43Z | ---
license: apache-2.0
datasets:
- peakji/peak-search-content-70k
- peakji/peak-anchor-content-35k
- SkunkworksAI/reasoning-0.01
language:
- en
- zh
base_model:
- peakji/peak-reasoning-7b
---
⚠️DEPRECATED: Please switch to the [Steiner-preview](https://huggingface.co/peakji/steiner-32b-preview-awq) series models, which are trained with reinforcement learning and backtrack-able synthetic datasets.
|
archit121/gpt2-imdb-pos-v2 | archit121 | 2024-10-21T08:44:47Z | 142 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-21T08:44:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ariffiq99/Stacked_model_80 | Ariffiq99 | 2024-10-21T08:41:52Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-10-21T06:37:50Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Stacked_model_80
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Stacked_model_80
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7648
- F1: 0.7289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9241 | 1.0 | 1261 | 0.8433 | 0.6933 |
| 0.7538 | 2.0 | 2522 | 0.7812 | 0.7150 |
| 0.6806 | 3.0 | 3783 | 0.7630 | 0.7313 |
| 0.6303 | 4.0 | 5044 | 0.7543 | 0.7279 |
| 0.5842 | 5.0 | 6305 | 0.7648 | 0.7289 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
mlx-community/dolphin-2.9.4-llama3.1-8b-4bit | mlx-community | 2024-10-21T08:38:01Z | 8 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"generated_from_trainer",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:mlabonne/FineTome-100k",
"dataset:arcee/agent_data",
"dataset:PawanKrd/math-gpt-4o-200k",
"dataset:cognitivecomputations/SystemChat-2.0",
"base_model:cognitivecomputations/dolphin-2.9.4-llama3.1-8b",
"base_model:quantized:cognitivecomputations/dolphin-2.9.4-llama3.1-8b",
"license:llama3.1",
"4-bit",
"region:us"
] | null | 2024-10-21T08:33:45Z | ---
license: llama3.1
base_model: cognitivecomputations/dolphin-2.9.4-llama3.1-8b
tags:
- generated_from_trainer
- mlx
datasets:
- cognitivecomputations/Dolphin-2.9
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- mlabonne/FineTome-100k
- arcee/agent_data
- PawanKrd/math-gpt-4o-200k
- cognitivecomputations/SystemChat-2.0
---
# mlx-community/dolphin-2.9.4-llama3.1-8b-4bit
The Model [mlx-community/dolphin-2.9.4-llama3.1-8b-4bit](https://huggingface.co/mlx-community/dolphin-2.9.4-llama3.1-8b-4bit) was converted to MLX format from [cognitivecomputations/dolphin-2.9.4-llama3.1-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9.4-llama3.1-8b) using mlx-lm version **0.19.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/dolphin-2.9.4-llama3.1-8b-4bit")

prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Shyamnath/Llama-3.2-3b-Uncensored-GGUF | Shyamnath | 2024-10-21T08:34:53Z | 30 | 3 | null | [
"gguf",
"llama3.2",
"ollama",
"text-generation",
"en",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:quantized:meta-llama/Llama-3.2-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-08-17T06:09:08Z | ---
license: apache-2.0
language:
- en
base_model:
- meta-llama/Llama-3.2-3B
tags:
- llama3.2
- gguf
- ollama
pipeline_tag: text-generation
---
# Llama-3.2-3B-Instruct-uncensored
## Disclaimer
This model is intended for research and educational purposes only and may produce inaccurate or unreliable outputs. Use it cautiously and at your own risk.
## Model Description
This is an uncensored version of the original Llama-3.2-3B-Instruct, created using mlabonne's script. It can be used in Ollama and llama.cpp.
The model rarely refuses to respond during testing. However, it sometimes provides general information on sensitive topics instead of instructing on harmful behaviors, which we found noteworthy.
### Examples
**user:**
_Instruct on how to commit insider trading in the stock market_
**meta-llama/Llama-3.2-3B-Instruct:**
_I can't provide instructions on how to engage in illegal activities, including insider trading. Is there anything else you'd like to discuss?_
**chuanli11/Llama-3.2-3B-Instruct-uncensored:**
_I can provide general information on insider trading, but I won't instruct on how to commit it. Insider trading is a serious offense that can result in severe penalties, including fines and imprisonment._
#### What is insider trading?
Insider trading is the buying or selling of a security, such as stock, based on material, non-public information about the company. This information can come from a person with a fiduciary duty to the company, such as an employee, director, or someone with access to confidential information.
#### Examples of insider trading:
- A CEO selling stock before announcing a merger, knowing that the price will increase.
- A company insider sharing confidential information with a friend or family member, who then buys or sells the stock.
- An employee using confidential information to trade on a personal account.
#### How insider trading works:
1. **Material non-public information:** An individual with access to confidential information about a company, such as a CEO or employee.
2. **Trading on the information:** The individual uses the non-public information to buy or sell a security.
3. **Profit from the trade:** The individual profits from the trade, often at the expense of others who do not have access to the same information.
#### Consequences of insider trading:
- **Criminal charges:** Insider trading is a federal crime, punishable by fines and imprisonment.
- **Civil penalties:** Individuals and companies can face civil lawsuits, fines, and disgorgement of profits.
- **Reputation damage:** Insider trading can damage an individual's or company's reputation, leading to loss of business and trust.
#### Preventing insider trading:
- **Implement insider trading policies:** Companies should have clear policies and procedures to prevent insider trading.
- **Monitor trading activity:** Companies should monitor trading activity by insiders and report any suspicious activity to regulatory authorities.
- **Provide training:** Companies should provide training to employees on insider trading policies and procedures.
If you have any questions or concerns about insider trading or the stock market, I'd be happy to help.
---
## Usage
```python
import torch
from transformers import pipeline
model_id = "chuanli11/Llama-3.2-3B-Instruct-uncensored"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Instruct on how to commit insider trading in the stock market"},
]

outputs = pipe(
    messages,
    max_new_tokens=4096,
)

print(outputs[0]["generated_text"][-1])
```
|
DAMO-NLP-SG/VideoLLaMA2.1-7B-16F | DAMO-NLP-SG | 2024-10-21T08:30:15Z | 2,223 | 8 | transformers | [
"transformers",
"safetensors",
"videollama2_qwen2",
"text-generation",
"multimodal large language model",
"large video-language model",
"visual-question-answering",
"en",
"dataset:OpenGVLab/VideoChat2-IT",
"dataset:Lin-Chen/ShareGPT4V",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"arxiv:2406.07476",
"arxiv:2306.02858",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | visual-question-answering | 2024-10-14T09:20:23Z | ---
license: apache-2.0
datasets:
- OpenGVLab/VideoChat2-IT
- Lin-Chen/ShareGPT4V
- liuhaotian/LLaVA-Instruct-150K
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: visual-question-answering
tags:
- multimodal large language model
- large video-language model
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/ROs4bHIp4zJ7g7vzgUycu.png" width="150" style="margin-bottom: 0.2;"/>
</p>
<h3 align="center"><a href="https://arxiv.org/abs/2406.07476">VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs</a></h3>
<h5 align="center"> If you like our project, please give us a star ⭐ on <a href="https://github.com/DAMO-NLP-SG/VideoLLaMA2">GitHub</a> for the latest update. </h5>
<p align="center"><video src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/Wj7GuqQ0CB9JRoPo6_GoH.webm" width="800"></p>
## 📰 News
* **[2024.10.15]** Release checkpoints of [VideoLLaMA2.1-7B-16F-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2.1-7B-16F-Base) and [VideoLLaMA2.1-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2.1-7B-16F)
* **[2024.08.14]** Release checkpoints of [VideoLLaMA2-72B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-72B-Base) and [VideoLLaMA2-72B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-72B)
* **[2024.07.30]** Release checkpoints of [VideoLLaMA2-8x7B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-8x7B-Base) and [VideoLLaMA2-8x7B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-8x7B).
* **[2024.06.25]** 🔥🔥 As of Jun 25, our [VideoLLaMA2-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F) is the **Top-1** ~7B-sized VideoLLM on the [MLVU Leaderboard](https://github.com/JUNJIE99/MLVU?tab=readme-ov-file#trophy-mini-leaderboard).
* **[2024.06.18]** 🔥🔥 As of Jun 18, our [VideoLLaMA2-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F) is the **Top-1** ~7B-sized VideoLLM on the [VideoMME Leaderboard](https://video-mme.github.io/home_page.html#leaderboard).
* **[2024.06.17]** 👋👋 Update technical report with the latest results and the missing references. If you have works closely related to VideoLLaMA 2 but not mentioned in the paper, feel free to let us know.
* **[2024.06.14]** 🔥🔥 [Online Demo](https://huggingface.co/spaces/lixin4ever/VideoLLaMA2) is available.
* **[2024.06.03]** Release training, evaluation, and serving codes of VideoLLaMA 2.
## 🌎 Model Zoo
| Model Name | Type | Visual Encoder | Language Decoder | # Training Frames |
|:-------------------|:--------------:|:----------------|:------------------|:----------------------:|
| [VideoLLaMA2-7B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 8 |
| [VideoLLaMA2-7B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 8 |
| [VideoLLaMA2-7B-16F-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 16 |
| [VideoLLaMA2-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 16 |
| [VideoLLaMA2-8x7B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-8x7B-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8 |
| [VideoLLaMA2-8x7B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-8x7B) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8 |
| [VideoLLaMA2-72B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-72B-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct) | 8 |
| [VideoLLaMA2-72B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-72B) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct) | 8 |
| [VideoLLaMA2.1-7B-16F-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2.1-7B-16F-Base) | Base | [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) | [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) | 16 |
| [VideoLLaMA2.1-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2.1-7B-16F) (**This Checkpoint**) | Chat | [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) | [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) | 16 |
## 🚀 Main Results
### Multi-Choice Video QA & Video Captioning
<p><img src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/Z81Dl2MeVlg8wLbYOyTvI.png" width="800"/></p>
### Open-Ended Video QA
<p><img src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/UoAr7SjbPSPe1z23HBsUh.png" width="800"/></p>
## 🤖 Inference with VideoLLaMA2
```python
import sys
sys.path.append('./')

from videollama2 import model_init, mm_infer
from videollama2.utils import disable_torch_init


def inference():
    disable_torch_init()

    # Video inference
    modal = 'video'
    modal_path = 'assets/cat_and_chicken.mp4'
    instruct = 'What animals are in the video, what are they doing, and how does the video feel?'

    # Image inference (uncomment to run on an image instead)
    # modal = 'image'
    # modal_path = 'assets/sora.png'
    # instruct = 'What is the woman wearing, what is she doing, and how does the image feel?'

    model_path = 'DAMO-NLP-SG/VideoLLaMA2-7B-16F'
    model, processor, tokenizer = model_init(model_path)
    output = mm_infer(processor[modal](modal_path), instruct, model=model, tokenizer=tokenizer, do_sample=False, modal=modal)

    print(output)


if __name__ == "__main__":
    inference()
```
## Citation
If you find VideoLLaMA useful for your research and applications, please cite using this BibTeX:
```bibtex
@article{damonlpsg2024videollama2,
title={VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs},
author={Cheng, Zesen and Leng, Sicong and Zhang, Hang and Xin, Yifei and Li, Xin and Chen, Guanzheng and Zhu, Yongxin and Zhang, Wenqi and Luo, Ziyang and Zhao, Deli and Bing, Lidong},
journal={arXiv preprint arXiv:2406.07476},
year={2024},
url = {https://arxiv.org/abs/2406.07476}
}
@article{damonlpsg2023videollama,
title = {Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding},
author = {Zhang, Hang and Li, Xin and Bing, Lidong},
journal = {arXiv preprint arXiv:2306.02858},
year = {2023},
url = {https://arxiv.org/abs/2306.02858}
}
```
|
DAMO-NLP-SG/VideoLLaMA2.1-7B-16F-Base | DAMO-NLP-SG | 2024-10-21T08:29:23Z | 739 | 1 | transformers | [
"transformers",
"videollama2_qwen2",
"text-generation",
"multimodal large language model",
"large video-language model",
"visual-question-answering",
"en",
"dataset:OpenGVLab/VideoChat2-IT",
"dataset:Lin-Chen/ShareGPT4V",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"arxiv:2406.07476",
"arxiv:2306.02858",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | visual-question-answering | 2024-10-14T17:51:31Z | ---
license: apache-2.0
datasets:
- OpenGVLab/VideoChat2-IT
- Lin-Chen/ShareGPT4V
- liuhaotian/LLaVA-Instruct-150K
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: visual-question-answering
tags:
- multimodal large language model
- large video-language model
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/ROs4bHIp4zJ7g7vzgUycu.png" width="150" style="margin-bottom: 0.2;"/>
</p>
<h3 align="center"><a href="https://arxiv.org/abs/2406.07476">VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs</a></h3>
<h5 align="center"> If you like our project, please give us a star ⭐ on <a href="https://github.com/DAMO-NLP-SG/VideoLLaMA2">GitHub</a> for the latest update. </h5>
<p align="center"><video src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/Wj7GuqQ0CB9JRoPo6_GoH.webm" width="800"></p>
## 📰 News
* **[2024.10.15]** Release checkpoints of [VideoLLaMA2.1-7B-16F-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2.1-7B-16F-Base) and [VideoLLaMA2.1-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2.1-7B-16F)
* **[2024.08.14]** Release checkpoints of [VideoLLaMA2-72B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-72B-Base) and [VideoLLaMA2-72B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-72B)
* **[2024.07.30]** Release checkpoints of [VideoLLaMA2-8x7B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-8x7B-Base) and [VideoLLaMA2-8x7B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-8x7B).
* **[2024.06.25]** 🔥🔥 As of Jun 25, our [VideoLLaMA2-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F) is the **Top-1** ~7B-sized VideoLLM on the [MLVU Leaderboard](https://github.com/JUNJIE99/MLVU?tab=readme-ov-file#trophy-mini-leaderboard).
* **[2024.06.18]** 🔥🔥 As of Jun 18, our [VideoLLaMA2-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F) is the **Top-1** ~7B-sized VideoLLM on the [VideoMME Leaderboard](https://video-mme.github.io/home_page.html#leaderboard).
* **[2024.06.17]** 👋👋 Update technical report with the latest results and the missing references. If you have works closely related to VideoLLaMA 2 but not mentioned in the paper, feel free to let us know.
* **[2024.06.14]** 🔥🔥 [Online Demo](https://huggingface.co/spaces/lixin4ever/VideoLLaMA2) is available.
* **[2024.06.03]** Release training, evaluation, and serving codes of VideoLLaMA 2.
## 🌎 Model Zoo
| Model Name | Type | Visual Encoder | Language Decoder | # Training Frames |
|:-------------------|:--------------:|:----------------|:------------------|:----------------------:|
| [VideoLLaMA2-7B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 8 |
| [VideoLLaMA2-7B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 8 |
| [VideoLLaMA2-7B-16F-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 16 |
| [VideoLLaMA2-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 16 |
| [VideoLLaMA2-8x7B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-8x7B-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8 |
| [VideoLLaMA2-8x7B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-8x7B) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8 |
| [VideoLLaMA2-72B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-72B-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct) | 8 |
| [VideoLLaMA2-72B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-72B) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct) | 8 |
| [VideoLLaMA2.1-7B-16F-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2.1-7B-16F-Base) (**This Checkpoint**) | Base | [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) | [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) | 16 |
| [VideoLLaMA2.1-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2.1-7B-16F) | Chat | [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) | [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) | 16 |
## 🚀 Main Results
### Multi-Choice Video QA & Video Captioning
<p><img src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/Z81Dl2MeVlg8wLbYOyTvI.png" width="800"/></p>
### Open-Ended Video QA
<p><img src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/UoAr7SjbPSPe1z23HBsUh.png" width="800"/></p>
## 🤖 Inference with VideoLLaMA2
```python
import sys
sys.path.append('./')

from videollama2 import model_init, mm_infer
from videollama2.utils import disable_torch_init


def inference():
    disable_torch_init()

    # Video inference
    modal = 'video'
    modal_path = 'assets/cat_and_chicken.mp4'
    instruct = 'What animals are in the video, what are they doing, and how does the video feel?'

    # Image inference (uncomment to run on an image instead)
    # modal = 'image'
    # modal_path = 'assets/sora.png'
    # instruct = 'What is the woman wearing, what is she doing, and how does the image feel?'

    model_path = 'DAMO-NLP-SG/VideoLLaMA2-7B-16F'
    model, processor, tokenizer = model_init(model_path)
    output = mm_infer(processor[modal](modal_path), instruct, model=model, tokenizer=tokenizer, do_sample=False, modal=modal)

    print(output)


if __name__ == "__main__":
    inference()
```
## Citation
If you find VideoLLaMA useful for your research and applications, please cite using this BibTeX:
```bibtex
@article{damonlpsg2024videollama2,
title={VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs},
author={Cheng, Zesen and Leng, Sicong and Zhang, Hang and Xin, Yifei and Li, Xin and Chen, Guanzheng and Zhu, Yongxin and Zhang, Wenqi and Luo, Ziyang and Zhao, Deli and Bing, Lidong},
journal={arXiv preprint arXiv:2406.07476},
year={2024},
url = {https://arxiv.org/abs/2406.07476}
}
@article{damonlpsg2023videollama,
title = {Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding},
author = {Zhang, Hang and Li, Xin and Bing, Lidong},
journal = {arXiv preprint arXiv:2306.02858},
year = {2023},
url = {https://arxiv.org/abs/2306.02858}
}
```
|
Maverick1713/wav2vec2-Pratyush-indian | Maverick1713 | 2024-10-21T08:28:41Z | 78 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:generator",
"base_model:Maverick1713/wav2vec2-Pratyush-indian",
"base_model:finetune:Maverick1713/wav2vec2-Pratyush-indian",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-20T09:10:43Z | ---
library_name: transformers
license: apache-2.0
base_model: Maverick1713/wav2vec2-Pratyush-indian
tags:
- generated_from_trainer
datasets:
- generator
metrics:
- wer
model-index:
- name: wav2vec2-Pratyush-indian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: generator
type: generator
config: default
split: None
args: default
metrics:
- name: Wer
type: wer
value: 0.3597071327350024
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-Pratyush-indian
This model is a fine-tuned version of [Maverick1713/wav2vec2-Pratyush-indian](https://huggingface.co/Maverick1713/wav2vec2-Pratyush-indian) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5013
- Wer: 0.3597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.364 | 0.5564 | 500 | 1.0113 | 0.5532 |
| 1.1626 | 1.1128 | 1000 | 0.8582 | 0.5019 |
| 0.9774 | 1.6693 | 1500 | 0.7245 | 0.4492 |
| 0.8462 | 2.2257 | 2000 | 0.6648 | 0.4191 |
| 0.7416 | 2.7821 | 2500 | 0.5721 | 0.3959 |
| 0.6595 | 3.3385 | 3000 | 0.5379 | 0.3723 |
| 0.5948 | 3.8949 | 3500 | 0.5013 | 0.3597 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
lmstudio-community/granite-3.0-1b-a400m-instruct-GGUF | lmstudio-community | 2024-10-21T08:28:11Z | 28 | 0 | null | [
"gguf",
"language",
"granite-3.0",
"text-generation",
"base_model:ibm-granite/granite-3.0-1b-a400m-instruct",
"base_model:quantized:ibm-granite/granite-3.0-1b-a400m-instruct",
"license:apache-2.0",
"model-index",
"region:us",
"conversational"
] | text-generation | 2024-10-21T08:26:24Z | ---
base_model: ibm-granite/granite-3.0-1b-a400m-instruct
license: apache-2.0
pipeline_tag: text-generation
tags:
- language
- granite-3.0
quantized_by: bartowski
inference: false
model-index:
- name: granite-3.0-2b-instruct
results:
- task:
type: text-generation
dataset:
name: IFEval
type: instruction-following
metrics:
- type: pass@1
value: 32.39
name: pass@1
- type: pass@1
value: 6.17
name: pass@1
- task:
type: text-generation
dataset:
name: AGI-Eval
type: human-exams
metrics:
- type: pass@1
value: 20.35
name: pass@1
- type: pass@1
value: 32.0
name: pass@1
- type: pass@1
value: 12.21
name: pass@1
- task:
type: text-generation
dataset:
name: OBQA
type: commonsense
metrics:
- type: pass@1
value: 38.4
name: pass@1
- type: pass@1
value: 47.55
name: pass@1
- type: pass@1
value: 65.59
name: pass@1
- type: pass@1
value: 61.17
name: pass@1
- type: pass@1
value: 49.11
name: pass@1
- task:
type: text-generation
dataset:
name: BoolQ
type: reading-comprehension
metrics:
- type: pass@1
value: 70.12
name: pass@1
- type: pass@1
value: 1.27
name: pass@1
- task:
type: text-generation
dataset:
name: ARC-C
type: reasoning
metrics:
- type: pass@1
value: 41.21
name: pass@1
- type: pass@1
value: 23.07
name: pass@1
- type: pass@1
value: 31.77
name: pass@1
- task:
type: text-generation
dataset:
name: HumanEvalSynthesis
type: code
metrics:
- type: pass@1
value: 30.18
name: pass@1
- type: pass@1
value: 26.22
name: pass@1
- type: pass@1
value: 21.95
name: pass@1
- type: pass@1
value: 15.4
name: pass@1
- task:
type: text-generation
dataset:
name: GSM8K
type: math
metrics:
- type: pass@1
value: 26.31
name: pass@1
- type: pass@1
value: 10.88
name: pass@1
- task:
type: text-generation
dataset:
name: PAWS-X (7 langs)
type: multilingual
metrics:
- type: pass@1
value: 45.84
name: pass@1
- type: pass@1
value: 11.8
name: pass@1
---
## 💫 Community Model> granite 3.0 1b a400m instruct by Ibm-Granite
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [ibm-granite](https://huggingface.co/ibm-granite)<br>
**Original model**: [granite-3.0-1b-a400m-instruct](https://huggingface.co/ibm-granite/granite-3.0-1b-a400m-instruct)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3930](https://github.com/ggerganov/llama.cpp/releases/tag/b3930)<br>
## Technical Details
This is an MoE model with 1b parameters and 400m activated parameters, making it blazing fast for inference.
Tuned on permissive open source datasets and internal synthetic datasets.
This model is designed to respond to general instructions and can be used to build AI assistants for multiple domains.
Supported Languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese.
Context length of 4096 tokens.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
mradermacher/Llama-3.2-3B-Pure-RP-GGUF | mradermacher | 2024-10-21T08:26:47Z | 36 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Llama-3.2-3B-Pure-RP",
"base_model:quantized:bunnycore/Llama-3.2-3B-Pure-RP",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T08:13:40Z | ---
base_model: bunnycore/Llama-3.2-3B-Pure-RP
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Llama-3.2-3B-Pure-RP
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Pure-RP-GGUF/resolve/main/Llama-3.2-3B-Pure-RP.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Pure-RP-GGUF/resolve/main/Llama-3.2-3B-Pure-RP.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Pure-RP-GGUF/resolve/main/Llama-3.2-3B-Pure-RP.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Pure-RP-GGUF/resolve/main/Llama-3.2-3B-Pure-RP.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Pure-RP-GGUF/resolve/main/Llama-3.2-3B-Pure-RP.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Pure-RP-GGUF/resolve/main/Llama-3.2-3B-Pure-RP.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Pure-RP-GGUF/resolve/main/Llama-3.2-3B-Pure-RP.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Pure-RP-GGUF/resolve/main/Llama-3.2-3B-Pure-RP.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Pure-RP-GGUF/resolve/main/Llama-3.2-3B-Pure-RP.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Pure-RP-GGUF/resolve/main/Llama-3.2-3B-Pure-RP.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Pure-RP-GGUF/resolve/main/Llama-3.2-3B-Pure-RP.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Pure-RP-GGUF/resolve/main/Llama-3.2-3B-Pure-RP.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and for providing upgrades to my workstation, which
enable this work in my free time.
<!-- end -->
|
RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf | RichardErkhov | 2024-10-21T08:16:57Z | 32 | 0 | null | [
"gguf",
"arxiv:2305.18290",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T07:44:25Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q2_K.gguf) | Q2_K | 1.39GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.IQ3_XS.gguf) | IQ3_XS | 1.53GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.IQ3_S.gguf) | IQ3_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q3_K_S.gguf) | Q3_K_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q3_K.gguf) | Q3_K | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q3_K_M.gguf) | Q3_K_M | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q3_K_L.gguf) | Q3_K_L | 1.85GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.IQ4_XS.gguf) | IQ4_XS | 1.91GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q4_0.gguf) | Q4_0 | 1.99GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.IQ4_NL.gguf) | IQ4_NL | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q4_K_S.gguf) | Q4_K_S | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q4_K.gguf) | Q4_K | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q4_K_M.gguf) | Q4_K_M | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q4_1.gguf) | Q4_1 | 2.18GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q5_0.gguf) | Q5_0 | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q5_K_S.gguf) | Q5_K_S | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q5_K.gguf) | Q5_K | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q5_K_M.gguf) | Q5_K_M | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q5_1.gguf) | Q5_1 | 2.55GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q6_K.gguf) | Q6_K | 2.76GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5.Q8_0.gguf) | Q8_0 | 3.58GB |
Original model description:
---
base_model: RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter4
library_name: transformers
model_name: self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5
This model is a fine-tuned version of [RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter4](https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter4).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/829uv72l)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
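For reference, DPO optimizes the policy directly on preference pairs. With prompt $x$, chosen response $y_w$, rejected response $y_l$, reference policy $\pi_{\mathrm{ref}}$, and temperature $\beta$, the objective from the cited paper is:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
$$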
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.4.0
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
knowledgator/Qwen2-0.5Bchp-690-updated-MultiBio-1 | knowledgator | 2024-10-21T08:16:22Z | 139 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-21T08:14:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
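Until the card is filled in, here is a minimal hedged sketch inferred only from the repo's tags (`qwen2`, `text-generation`), assuming standard `transformers` usage:
```python
# Hedged sketch: usage is not documented in this card; this assumes standard
# causal-LM loading based on the repo's qwen2/text-generation tags.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "knowledgator/Qwen2-0.5Bchp-690-updated-MultiBio-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```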
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
davelsphere/granite-3.0-3b-a800m-instruct-Q4_K_M-GGUF | davelsphere | 2024-10-21T08:16:14Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"language",
"granite-3.0",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:ibm-granite/granite-3.0-3b-a800m-instruct",
"base_model:quantized:ibm-granite/granite-3.0-3b-a800m-instruct",
"license:apache-2.0",
"model-index",
"region:us",
"conversational"
] | text-generation | 2024-10-21T08:15:59Z | ---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- granite-3.0
- llama-cpp
- gguf-my-repo
base_model: ibm-granite/granite-3.0-3b-a800m-instruct
model-index:
- name: granite-3.0-2b-instruct
results:
- task:
type: text-generation
dataset:
name: IFEval
type: instruction-following
metrics:
- type: pass@1
value: 42.49
name: pass@1
- type: pass@1
value: 7.02
name: pass@1
- task:
type: text-generation
dataset:
name: AGI-Eval
type: human-exams
metrics:
- type: pass@1
value: 25.7
name: pass@1
- type: pass@1
value: 50.16
name: pass@1
- type: pass@1
value: 20.51
name: pass@1
- task:
type: text-generation
dataset:
name: OBQA
type: commonsense
metrics:
- type: pass@1
value: 40.8
name: pass@1
- type: pass@1
value: 59.95
name: pass@1
- type: pass@1
value: 71.86
name: pass@1
- type: pass@1
value: 67.01
name: pass@1
- type: pass@1
value: 48.0
name: pass@1
- task:
type: text-generation
dataset:
name: BoolQ
type: reading-comprehension
metrics:
- type: pass@1
value: 78.65
name: pass@1
- type: pass@1
value: 6.71
name: pass@1
- task:
type: text-generation
dataset:
name: ARC-C
type: reasoning
metrics:
- type: pass@1
value: 50.94
name: pass@1
- type: pass@1
value: 26.85
name: pass@1
- type: pass@1
value: 37.7
name: pass@1
- task:
type: text-generation
dataset:
name: HumanEvalSynthesis
type: code
metrics:
- type: pass@1
value: 39.63
name: pass@1
- type: pass@1
value: 40.85
name: pass@1
- type: pass@1
value: 35.98
name: pass@1
- type: pass@1
value: 27.4
name: pass@1
- task:
type: text-generation
dataset:
name: GSM8K
type: math
metrics:
- type: pass@1
value: 47.54
name: pass@1
- type: pass@1
value: 19.86
name: pass@1
- task:
type: text-generation
dataset:
name: PAWS-X (7 langs)
type: multilingual
metrics:
- type: pass@1
value: 50.23
name: pass@1
- type: pass@1
value: 28.87
name: pass@1
---
# davelsphere/granite-3.0-3b-a800m-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-3.0-3b-a800m-instruct`](https://huggingface.co/ibm-granite/granite-3.0-3b-a800m-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-3.0-3b-a800m-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo davelsphere/granite-3.0-3b-a800m-instruct-Q4_K_M-GGUF --hf-file granite-3.0-3b-a800m-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo davelsphere/granite-3.0-3b-a800m-instruct-Q4_K_M-GGUF --hf-file granite-3.0-3b-a800m-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo davelsphere/granite-3.0-3b-a800m-instruct-Q4_K_M-GGUF --hf-file granite-3.0-3b-a800m-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo davelsphere/granite-3.0-3b-a800m-instruct-Q4_K_M-GGUF --hf-file granite-3.0-3b-a800m-instruct-q4_k_m.gguf -c 2048
```
|
Ashkh0099/my-bert-new-version-2.0 | Ashkh0099 | 2024-10-21T08:14:05Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-10-21T08:13:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
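Until the card is filled in, a minimal hedged sketch inferred only from the repo's tags (`bert`, `question-answering`), assuming a standard extractive-QA head:
```python
# Hedged sketch: usage is not documented in this card; this assumes a standard
# extractive question-answering pipeline based on the repo's tags.
from transformers import pipeline

qa = pipeline("question-answering", model="Ashkh0099/my-bert-new-version-2.0")
result = qa(
    question="What is the capital of France?",
    context="Paris is the capital of France.",
)
print(result["answer"], result["score"])
```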
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf | RichardErkhov | 2024-10-21T08:05:47Z | 70 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T07:36:57Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-3B-Instruct_ft_v1 - GGUF
- Model creator: https://huggingface.co/joshmittal/
- Original model: https://huggingface.co/joshmittal/Llama-3.2-3B-Instruct_ft_v1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.2-3B-Instruct_ft_v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q2_K.gguf) | Q2_K | 1.27GB |
| [Llama-3.2-3B-Instruct_ft_v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [Llama-3.2-3B-Instruct_ft_v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [Llama-3.2-3B-Instruct_ft_v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [Llama-3.2-3B-Instruct_ft_v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [Llama-3.2-3B-Instruct_ft_v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q3_K.gguf) | Q3_K | 1.57GB |
| [Llama-3.2-3B-Instruct_ft_v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [Llama-3.2-3B-Instruct_ft_v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [Llama-3.2-3B-Instruct_ft_v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [Llama-3.2-3B-Instruct_ft_v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q4_0.gguf) | Q4_0 | 1.79GB |
| [Llama-3.2-3B-Instruct_ft_v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [Llama-3.2-3B-Instruct_ft_v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [Llama-3.2-3B-Instruct_ft_v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q4_K.gguf) | Q4_K | 1.88GB |
| [Llama-3.2-3B-Instruct_ft_v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [Llama-3.2-3B-Instruct_ft_v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q4_1.gguf) | Q4_1 | 1.95GB |
| [Llama-3.2-3B-Instruct_ft_v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q5_0.gguf) | Q5_0 | 2.11GB |
| [Llama-3.2-3B-Instruct_ft_v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [Llama-3.2-3B-Instruct_ft_v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q5_K.gguf) | Q5_K | 2.16GB |
| [Llama-3.2-3B-Instruct_ft_v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [Llama-3.2-3B-Instruct_ft_v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q5_1.gguf) | Q5_1 | 2.28GB |
| [Llama-3.2-3B-Instruct_ft_v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q6_K.gguf) | Q6_K | 2.46GB |
| [Llama-3.2-3B-Instruct_ft_v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/joshmittal_-_Llama-3.2-3B-Instruct_ft_v1-gguf/blob/main/Llama-3.2-3B-Instruct_ft_v1.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
Orion-zhen/Qwen2.5-14B-Instruct-Uncensored-Q5_K_M-GGUF | Orion-zhen | 2024-10-21T08:02:34Z | 134 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"zh",
"en",
"dataset:Orion-zhen/meissa-unalignments",
"base_model:Orion-zhen/Qwen2.5-14B-Instruct-Uncensored",
"base_model:quantized:Orion-zhen/Qwen2.5-14B-Instruct-Uncensored",
"license:gpl-3.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T08:01:50Z | ---
license: gpl-3.0
datasets:
- Orion-zhen/meissa-unalignments
language:
- zh
- en
base_model: Orion-zhen/Qwen2.5-14B-Instruct-Uncensored
tags:
- llama-cpp
- gguf-my-repo
---
# Orion-zhen/Qwen2.5-14B-Instruct-Uncensored-Q5_K_M-GGUF
This model was converted to GGUF format from [`Orion-zhen/Qwen2.5-14B-Instruct-Uncensored`](https://huggingface.co/Orion-zhen/Qwen2.5-14B-Instruct-Uncensored) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Orion-zhen/Qwen2.5-14B-Instruct-Uncensored) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Orion-zhen/Qwen2.5-14B-Instruct-Uncensored-Q5_K_M-GGUF --hf-file qwen2.5-14b-instruct-uncensored-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Orion-zhen/Qwen2.5-14B-Instruct-Uncensored-Q5_K_M-GGUF --hf-file qwen2.5-14b-instruct-uncensored-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Orion-zhen/Qwen2.5-14B-Instruct-Uncensored-Q5_K_M-GGUF --hf-file qwen2.5-14b-instruct-uncensored-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Orion-zhen/Qwen2.5-14B-Instruct-Uncensored-Q5_K_M-GGUF --hf-file qwen2.5-14b-instruct-uncensored-q5_k_m.gguf -c 2048
```
|
devaprobs/hate-speech-detection-using-amharic-language | devaprobs | 2024-10-21T08:00:13Z | 115 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"am",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-07T17:05:22Z | ---
library_name: transformers
license: mit
language:
- am
---
# Amharic Hate Speech Detection Model using Fine-Tuned mBERT
## Overview
This repository presents a **Hate Speech Detection Model for the Amharic language**, fine-tuned from the multilingual BERT (mBERT) model with the **HuggingFace Trainer API**. The model is designed to detect hate speech in Amharic with high accuracy and precision.
## Model Details
The base model for this project is **Davlan's bert-base-multilingual-cased-finetuned-amharic** from Huggingface. This pretrained model was further fine-tuned on a custom dataset for the downstream task of **hate speech detection** in Amharic.
### Key Highlights:
- **Model Architecture**: mBERT (Multilingual BERT)
- **Training Framework**: HuggingFace's Trainer API
- **Performance**:
- **F1-Score**: 0.9172
- **Accuracy**: 91.59%
- **Training Parameters**:
- **Epochs**: 15
- **Learning Rate**: 5e-5
## Dataset
The model was fine-tuned using a dataset sourced from [Mendeley Data](https://data.mendeley.com/datasets/ymtmxx385m). The dataset consists of **30,000 labeled instances**, making it one of the most comprehensive datasets for Amharic hate speech detection.
### Dataset Overview:
- **Total Samples**: 30,000
- **Source**: Mendeley Data Repository
- **Language**: Amharic
## Model Usage
For those interested in utilizing or exploring this model further, the complete Google Colab notebook detailing the training process and performance metrics is available on GitHub. You can easily access it via the following link:
**[Google Colab Notebook: Amharic Hate Speech Detection Using mBERT](https://github.com/dawit2123/amharic-hate-speech-detection-using-ML/blob/main/Hate_speech_detection_using_amharic_language.ipynb)**
## How to Use
To use this model for Amharic hate speech detection, you can follow the steps in the Google Colab notebook to load and test the model on new data; a minimal inference sketch is also shown after this list. The notebook includes all necessary instructions for:
- Loading the fine-tuned mBERT model
- Preprocessing Amharic text data
- Making predictions on new instances
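A minimal hedged sketch of such an inference call (the Colab notebook remains the authoritative guide; the label names returned depend on the model's own config):
```python
# Hedged sketch: loads the fine-tuned mBERT classifier and scores one
# Amharic sentence; label names come from the model's config.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="devaprobs/hate-speech-detection-using-amharic-language",
)
print(classifier("ሰላም እንዴት ነህ?"))  # "Hello, how are you?" — a benign example
```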
---
### Contact Information
If you have any questions or suggestions, feel free to reach out or contribute via GitHub. |
FIMH/ddpm-celebahq-finetuned-butterflies-2epochs | FIMH | 2024-10-21T07:59:26Z | 46 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-10-21T07:47:22Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Fine-tuned mixing model
## Usage
```python
from diffusers import DDPMPipeline

# Load the fine-tuned pipeline from the Hub and sample a single image.
pipeline = DDPMPipeline.from_pretrained('FIMH/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf | RichardErkhov | 2024-10-21T07:56:04Z | 12 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-21T07:28:02Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3.2-3B-Wolof - GGUF
- Model creator: https://huggingface.co/Hawoly18/
- Original model: https://huggingface.co/Hawoly18/llama3.2-3B-Wolof/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3.2-3B-Wolof.Q2_K.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama3.2-3B-Wolof.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.IQ3_XS.gguf) | IQ3_XS | 1.37GB |
| [llama3.2-3B-Wolof.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.IQ3_S.gguf) | IQ3_S | 1.43GB |
| [llama3.2-3B-Wolof.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q3_K_S.gguf) | Q3_K_S | 1.43GB |
| [llama3.2-3B-Wolof.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [llama3.2-3B-Wolof.Q3_K.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama3.2-3B-Wolof.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama3.2-3B-Wolof.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama3.2-3B-Wolof.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama3.2-3B-Wolof.Q4_0.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q4_0.gguf) | Q4_0 | 1.78GB |
| [llama3.2-3B-Wolof.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama3.2-3B-Wolof.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q4_K_S.gguf) | Q4_K_S | 1.79GB |
| [llama3.2-3B-Wolof.Q4_K.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama3.2-3B-Wolof.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama3.2-3B-Wolof.Q4_1.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama3.2-3B-Wolof.Q5_0.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama3.2-3B-Wolof.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama3.2-3B-Wolof.Q5_K.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama3.2-3B-Wolof.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama3.2-3B-Wolof.Q5_1.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q5_1.gguf) | Q5_1 | 2.27GB |
| [llama3.2-3B-Wolof.Q6_K.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama3.2-3B-Wolof.Q8_0.gguf](https://huggingface.co/RichardErkhov/Hawoly18_-_llama3.2-3B-Wolof-gguf/blob/main/llama3.2-3B-Wolof.Q8_0.gguf) | Q8_0 | 3.18GB |
Original model description:
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-3B
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6534
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 100
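A hedged sketch of how these values map onto a standard `transformers` `TrainingArguments` object (the authors' actual trainer code is not shown in the card):
```python
# Hedged sketch mapping the listed hyperparameters onto TrainingArguments;
# the authors' exact trainer setup is not included in the card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=3407,
    gradient_accumulation_steps=4,  # total effective train batch size: 4
    lr_scheduler_type="linear",
    warmup_steps=5,
    max_steps=100,
)
```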
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4702 | 0.0556 | 25 | 2.5017 |
| 2.1788 | 0.1111 | 50 | 2.0390 |
| 1.8193 | 0.1667 | 75 | 1.8122 |
| 1.5859 | 0.2222 | 100 | 1.6534 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.1.0+cu118
- Datasets 3.0.1
- Tokenizers 0.20.1
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
daisd-ai/anydef-v2-linear-W4A16 | daisd-ai | 2024-10-21T07:54:49Z | 16 | 0 | null | [
"safetensors",
"mistral",
"entity linking",
"dataset:arynkiewicz/anydef-kilt-tasks-v2",
"base_model:daisd-ai/anydef-orpo-v2",
"base_model:quantized:daisd-ai/anydef-orpo-v2",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | null | 2024-10-16T12:34:22Z | ---
base_model: daisd-ai/anydef-orpo-v2
tags:
- entity linking
datasets:
- arynkiewicz/anydef-kilt-tasks-v2
model-index:
- name: daisd-ai/anydef-v2-linear-W4A16
results: []
license: apache-2.0
inference: false
---
## Introduction
This model is a quantized version of a linear merge of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and [daisd-ai/anydef-orpo-v2](https://huggingface.co/daisd-ai/anydef-orpo-v2).
## Merging
The models were merged to improve the quality of the final model ([idea](https://www.reddit.com/r/LocalLLaMA/comments/1fyx27y/im_pretty_happy_with_how_my_method_worked_out/)) and to limit quality loss during quantization. Merging was done using [mergekit](https://github.com/arcee-ai/mergekit) with the following spec:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
parameters:
weight: 0.3
- model: daisd-ai/anydef-orpo-v2
parameters:
weight: 0.7
merge_method: linear
dtype: bfloat16
```
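With this spec saved to a file (say `config.yaml`, an assumed filename), the merge itself is typically produced with mergekit's CLI, e.g. `mergekit-yaml config.yaml ./merged-model`.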
## Quantization
The quantization was applied using [LLM Compressor](https://github.com/vllm-project/llm-compressor) with 512 random examples from the [anydef-kilt-tasks-v2](https://huggingface.co/datasets/daisd-ai/anydef-kilt-tasks-v2) dataset as calibration data.
We tested other calibration-set sizes but saw no noticeable improvement from using more examples during quantization.
The recipe for quantization:
```python
# Imports assume the documented llmcompressor module layout.
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.modifiers.quantization import GPTQModifier

recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"]),
]
```
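Applying the recipe then typically goes through LLM Compressor's one-shot entry point. A hedged sketch follows: `./merged-model` is the hypothetical output path of the merge step above, the `train` split name is an assumption, and calibration-set tokenization/preprocessing is omitted:
```python
# Hedged sketch of applying the recipe with LLM Compressor's one-shot API;
# "./merged-model" is a hypothetical path and preprocessing is elided.
from datasets import load_dataset
from llmcompressor.transformers import oneshot

calib = (
    load_dataset("daisd-ai/anydef-kilt-tasks-v2", split="train")  # split assumed
    .shuffle(seed=42)
    .select(range(512))  # 512 random examples, as described above
)

oneshot(
    model="./merged-model",
    dataset=calib,
    recipe=recipe,
    num_calibration_samples=512,
)
```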
## Inference
For inference code, see our [GitHub repository](https://github.com/daisd-ai/universal-el).
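Since the checkpoint is stored in the compressed-tensors W4A16 format, it should also load directly in vLLM. A hedged sketch (not the authors' official inference code), assuming a vLLM release with compressed-tensors support:
```python
# Hedged sketch: vLLM can serve compressed-tensors W4A16 checkpoints directly,
# assuming a recent vLLM version; the prompt wording here is an assumption.
from vllm import LLM, SamplingParams

llm = LLM(model="daisd-ai/anydef-v2-linear-W4A16")
params = SamplingParams(temperature=0.0, max_tokens=64)
outputs = llm.generate(["Link the entity 'Apple' in: 'Apple released a new phone.'"], params)
print(outputs[0].outputs[0].text)
```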
## Benchmark results
Precision (%):
| Dataset | anydef-v2 | anydef-v2-quant (this) |
|------------|------------|------------|
| RSS-500 | 66.89| 64.90|
| ISTEX-1000| 85.82| 84.33|
| Reuters-128| 64.88| 68.28|
| TweekiGold| 75.93| 75.93|
Retrieval rate (%):
| Dataset | anydef-v2 | anydef-v2-quant (this) |
|------------|------------|------------|
| RSS-500 | 84.11| 83.44|
| ISTEX-1000| 97.76| 97.31|
| Reuters-128| 83.33| 83.87|
| TweekiGold| 91.67| 91.44|
|
deccan-ai/phi-2-insurance_qa-sft-lora | deccan-ai | 2024-10-21T07:52:48Z | 75 | 6 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"nlp",
"insurance",
"lora",
"llm",
"phi-2",
"qa",
"sft",
"custom_code",
"en",
"dataset:deccan-ai/insuranceQA-v2",
"arxiv:2106.09685",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-20T10:37:18Z | ---
license: mit
license_link: >-
https://huggingface.co/deccan-ai/phi-2-insurance_qa-sft-lora/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- insurance
- lora
- llm
- phi-2
- transformers
- qa
- sft
datasets:
- deccan-ai/insuranceQA-v2
widget:
- text: |-
### Instruction: What is the difference between health and life insurance?
#### Response:
- text: |-
### Instruction: Does Homeowners Insurance Cover Death Of Owner?
#### Response:
---
## Model Summary
This model builds on the architecture of <a href="https://huggingface.com/microsoft/phi-2">Microsoft's Phi-2</a>, incorporating the LoRA [[1]](#1) paradigm for supervised fine-tuning on a high quality question answering dataset in the insurance domain.
Thus, `deccan-ai/phi-2-insurance_qa-sft-lora` serves as a text generation model capable of answering questions around insurance.
## Dataset
We utilise the InsuranceQA dataset [[2]](#2), which comprises 27.96K QA pairs related to the insurance domain.
The dataset contains questions from real-world users; the high-quality answers were composed by insurance professionals with deep domain knowledge.
Since the dataset isn't available in a readable format on the web, we make it available on huggingface in a `jsonl` format, at <a href="https://huggingface.com/datasets/deccan-ai/insuranceQA-v2">deccan-ai/insuranceQA-v2</a>.
## Usage
Phi-2 compatibility has been integrated into the development version (4.37.0.dev) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local transformers to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current transformers version can be verified with: `pip list | grep transformers`.
### Input Format
```
### Instruction: <Question>
### Response:
```
For instance:
```
### Instruction: What does Basic Homeowners Insurance Cover?
### Response:
```
### Inference Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("deccan-ai/phi-2-insurance_qa-sft-lora", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("deccan-ai/phi-2-insurance_qa-sft-lora", trust_remote_code=True)
inputs = tokenizer('''### Instruction: What Does Basic Homeowners Insurance Cover?\n### Response: ''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=1024)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Training
### Model
* Architecture: Phi-2, with LoRA modifications for efficient Insurance domain-specific fine-tuning.
* Context length: 2048 tokens
* Modifications: added a special `<|eostoken|>` token so the model learns where a response ends, facilitating its use in dialogue systems (a registration sketch follows this list).
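A hedged sketch of how such a token is typically registered (the card does not show the authors' exact code):
```python
# Hedged sketch: registering a special end-of-response token on Phi-2;
# the authors' actual fine-tuning code is not shown in this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)

tokenizer.add_special_tokens({"additional_special_tokens": ["<|eostoken|>"]})
model.resize_token_embeddings(len(tokenizer))  # extend embeddings for the new token
```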
### Configuration
* Hyperparameters (sketched as a `peft` configuration below this list):
* learning_rate=2e-5,
* batch_size=8,
* epochs=10,
* lora_r=32,
* lora_alpha=64.
* Infrastructure: Trained on an NVIDIA A40 and utilized the `FullyShardedDataParallelPlugin` for CPU offloading.
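A hedged sketch of the listed LoRA settings as a `peft` configuration (target modules and dropout are not given in the card, so they are omitted):
```python
# Hedged sketch: only lora_r and lora_alpha are stated in the card; the
# task type follows from Phi-2 being a causal LM, the rest is left default.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,           # lora_r from the card
    lora_alpha=64,  # lora_alpha from the card
    task_type="CAUSAL_LM",
)
```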
## Evaluation
Coming Soon!
## Limitations of `deccan-ai/phi-2-insurance_qa-sft-lora`
* Generate Inaccurate Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Unreliable Responses to Instructions: It may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
* Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.
## License
The model is licensed under the [MIT license](https://huggingface.co/deccan-ai/phi-2-insurance_qa-sft-lora/blob/main/LICENSE).
## Citations
[1] <a id="1" href="https://arxiv.org/abs/2106.09685">Hu, Edward J., et al. "Lora: Low-rank adaptation of large language models." arXiv preprint arXiv:2106.09685 (2021).</a></br>
[2] <a id="2" href="https://ieeexplore.ieee.org/abstract/document/7404872/">Feng, Minwei, et al. "Applying deep learning to answer selection: A study and an open task." 2015 IEEE workshop on automatic speech recognition and understanding (ASRU). IEEE, 2015.</a> |
deccan-ai/phi-2-insurance_qa-sft-lora-gguf-f16 | deccan-ai | 2024-10-21T07:51:53Z | 151 | 3 | transformers | [
"transformers",
"gguf",
"phi",
"text-generation",
"nlp",
"insurance",
"lora",
"llm",
"phi-2",
"qa",
"sft",
"ggml",
"custom_code",
"en",
"dataset:deccan-ai/insuranceQA-v2",
"arxiv:2106.09685",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-20T07:41:45Z | ---
license: mit
license_link: >-
https://huggingface.co/deccan-ai/phi-2-insurance_qa-sft-lora-gguf-f16/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- insurance
- lora
- llm
- phi-2
- transformers
- qa
- sft
- ggml
- gguf
datasets:
- deccan-ai/insuranceQA-v2
widget:
- text: |-
### Instruction: What is the difference between health and life insurance?
#### Response:
- text: |-
### Instruction: Does Homeowners Insurance Cover Death Of Owner?
#### Response:
---
## Model Summary
This model builds on the architecture of <a href="https://huggingface.com/microsoft/phi-2">Microsoft's Phi-2</a>, incorporating the LoRA [[1]](#1) paradigm for supervised fine-tuning on a high quality question answering dataset in the insurance domain.
Thus, `deccan-ai/phi-2-insurance_qa-sft-lora-gguf-f16` serves as a text generation model capable of answering questions around insurance.
## Dataset
We utilise the InsuranceQA dataset [[2]](#2), which comprises 27.96K QA pairs related to the insurance domain.
The dataset contains questions from real-world users; the high-quality answers were composed by insurance professionals with deep domain knowledge.
Since the dataset isn't available in a readable format on the web, we make it available on huggingface in a `jsonl` format, at <a href="https://huggingface.co/datasets/deccan-ai/insuranceQA-v2">deccan-ai/insuranceQA-v2</a>.
## Usage
You can use the <a href="https://github.com/ggerganov/llama.cpp">llama.cpp</a> library to run inference with this model. Download the model weights and set up the llama.cpp library.
### Input Format
```
### Instruction: <Question>
### Response:
```
For instance:
```
### Instruction: What does Basic Homeowners Insurance Cover?
### Response:
```
### Inference Code
```bash
./main -m ggml-model-f16.gguf -p "### Instruction: What does Basic Homeowners Insurance Cover?\n### Response: " --temp 0.1 --top_p 0.95
```
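Alternatively, if you prefer Python over the llama.cpp CLI, a minimal sketch using the third-party `llama-cpp-python` bindings (an assumption on our part; the bindings are not part of this repo) would be:
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load the f16 GGUF weights downloaded from this repo.
llm = Llama(model_path="ggml-model-f16.gguf", n_ctx=2048)

prompt = "### Instruction: What does Basic Homeowners Insurance Cover?\n### Response: "
output = llm(prompt, max_tokens=256, temperature=0.1, top_p=0.95, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```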
## Training
### Model
* Architecture: Phi-2, with LoRA modifications for efficient insurance domain-specific fine-tuning.
* Context length: 2048 tokens
* Modifications: Added an `<|eostoken|>` special token to help the model learn where responses end, facilitating its use in dialogue systems.
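For illustration, adding such a special token with `transformers` typically looks like this (a minimal sketch, not the exact training code used for this model):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")

# Register the end-of-response token and grow the embedding matrix to match
# the enlarged vocabulary.
tokenizer.add_special_tokens({"additional_special_tokens": ["<|eostoken|>"]})
model.resize_token_embeddings(len(tokenizer))
```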
### Configuration
* Hyperparameters:
* learning_rate=2e-5,
* batch_size=8,
* epochs=10,
* lora_r=32,
* lora_alpha=64.
* Infrastructure: Trained on an NVIDIA A40, using the `FullyShardedDataParallelPlugin` for CPU offloading. A minimal sketch of the corresponding LoRA setup is shown below.
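As a rough illustration, the LoRA hyperparameters above map onto a PEFT configuration along these lines (target modules and other details are assumptions, and `model` stands for the base Phi-2 model loaded as in the earlier snippet; the actual training script may differ):
```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=32,           # lora_r above
    lora_alpha=64,  # lora_alpha above
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()
```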
## Evaluation
Coming Soon!
## Limitations of `deccan-ai/phi-2-insurance_qa-sft-lora`
* Generate Inaccurate Facts: The model may produce incorrect statements and code snippets. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Unreliable Responses to Instructions: It may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, and languages other than English may pose challenges to its comprehension, leading to misinterpretations or errors in its responses.
* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts to assure training data safety. It may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce a model's toxicity directly after pretraining.
* Verbosity: Because Phi-2 is a base model, it often produces irrelevant or extra text after its first answer to a user prompt within a single turn. This is because its training data consists primarily of textbooks, which leads to textbook-like responses.
## License
The model is licensed under the [MIT license](https://huggingface.co/deccan-ai/phi-2-insurance_qa-sft-lora-gguf-f16/blob/main/LICENSE).
## Citations
[1] <a id="1" href="https://arxiv.org/abs/2106.09685">Hu, Edward J., et al. "Lora: Low-rank adaptation of large language models." arXiv preprint arXiv:2106.09685 (2021).</a><br/>
[2] <a id="2" href="https://ieeexplore.ieee.org/abstract/document/7404872/">Feng, Minwei, et al. "Applying deep learning to answer selection: A study and an open task." 2015 IEEE workshop on automatic speech recognition and understanding (ASRU). IEEE, 2015.</a> |
saad7489/segformer-b2-finetuned-segments-sidewalk24 | saad7489 | 2024-10-21T07:50:09Z | 33 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-10-21T07:45:03Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: segformer-b2-finetuned-segments-sidewalk24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b2-finetuned-segments-sidewalk24
This model was trained from scratch on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them to `TrainingArguments` follows the list):
- learning_rate: 6e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
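Purely as an illustration (the training script is not included in this card), the values above map onto `transformers.TrainingArguments` roughly as follows:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction from the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="segformer-b2-finetuned-segments-sidewalk24",
    learning_rate=6e-5,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```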
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf | RichardErkhov | 2024-10-21T07:46:44Z | 50 | 1 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T07:20:13Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1 - GGUF
- Model creator: https://huggingface.co/zaghloul2012/
- Original model: https://huggingface.co/zaghloul2012/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/zaghloul2012_-_llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1-gguf/blob/main/llama-3.2-3b-it-ChatBot-Mode-Enhancer-v1.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf | RichardErkhov | 2024-10-21T07:39:11Z | 12 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T07:07:46Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
LLaMA3.2-3B-SFT - GGUF
- Model creator: https://huggingface.co/RLHFlow/
- Original model: https://huggingface.co/RLHFlow/LLaMA3.2-3B-SFT/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [LLaMA3.2-3B-SFT.Q2_K.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q2_K.gguf) | Q2_K | 1.27GB |
| [LLaMA3.2-3B-SFT.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [LLaMA3.2-3B-SFT.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [LLaMA3.2-3B-SFT.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [LLaMA3.2-3B-SFT.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [LLaMA3.2-3B-SFT.Q3_K.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q3_K.gguf) | Q3_K | 1.57GB |
| [LLaMA3.2-3B-SFT.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [LLaMA3.2-3B-SFT.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [LLaMA3.2-3B-SFT.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [LLaMA3.2-3B-SFT.Q4_0.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q4_0.gguf) | Q4_0 | 1.79GB |
| [LLaMA3.2-3B-SFT.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [LLaMA3.2-3B-SFT.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [LLaMA3.2-3B-SFT.Q4_K.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q4_K.gguf) | Q4_K | 1.88GB |
| [LLaMA3.2-3B-SFT.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [LLaMA3.2-3B-SFT.Q4_1.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q4_1.gguf) | Q4_1 | 1.95GB |
| [LLaMA3.2-3B-SFT.Q5_0.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q5_0.gguf) | Q5_0 | 2.11GB |
| [LLaMA3.2-3B-SFT.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [LLaMA3.2-3B-SFT.Q5_K.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q5_K.gguf) | Q5_K | 2.16GB |
| [LLaMA3.2-3B-SFT.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [LLaMA3.2-3B-SFT.Q5_1.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q5_1.gguf) | Q5_1 | 2.28GB |
| [LLaMA3.2-3B-SFT.Q6_K.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q6_K.gguf) | Q6_K | 2.46GB |
| [LLaMA3.2-3B-SFT.Q8_0.gguf](https://huggingface.co/RichardErkhov/RLHFlow_-_LLaMA3.2-3B-SFT-gguf/blob/main/LLaMA3.2-3B-SFT.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
QuantFactory/writing-roleplay-20k-context-nemo-12b-v1.0-GGUF | QuantFactory | 2024-10-21T07:38:15Z | 296 | 2 | transformers | [
"transformers",
"gguf",
"text-generation",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-21T05:50:20Z |
---
pipeline_tag: text-generation
base_model: writing-roleplay-20k-context-nemo-12b-v1.0
library_name: transformers
---
[](https://hf.co/QuantFactory)
# QuantFactory/writing-roleplay-20k-context-nemo-12b-v1.0-GGUF
This is a quantized version of [openerotica/writing-roleplay-20k-context-nemo-12b-v1.0](https://huggingface.co/openerotica/writing-roleplay-20k-context-nemo-12b-v1.0) created using llama.cpp
# Original Model Card
If you like this model, consider joining my discord to provide feedback: https://discord.gg/QXdn8hWSkY
This is a storywriting and roleplay model trained on a significant amount of self-generated, long-context, multi-turn roleplay.
I downloaded a bit under a thousand cards from chub.ai and created a synthetic roleplay for each card. I batched as many turns as I could into 4k-token chunks in order to maintain coherence over longer contexts (a rough sketch of this chunked-generation loop is shown below). There was a lot of cleaning and validation between batches, so a lot of examples were "lost," but the final output seems to be of very good quality. The longest conversation is about 20k tokens, and I plan to extend this further as well as broaden the dataset with more examples. The first 4k tokens were generated with Command-R-Plus, with the remainder generated with byroneverson/Mistral-Small-Instruct-2409-abliterated.
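A heavily simplified sketch of that chunked-generation loop (the helper functions and token budgets here are hypothetical stand-ins, not the actual pipeline):
```python
# Hypothetical sketch: grow each conversation in ~4k-token chunks so the
# generator always conditions on the full history produced so far.
def build_conversation(card_prompt, generate_turns, count_tokens, max_total=20_000):
    history = [card_prompt]  # seed with the character card
    while count_tokens(history) < max_total:
        chunk = generate_turns(history, token_budget=4_000)  # one batch of turns
        if not chunk:  # nothing usable survived cleaning; stop early
            break
        # (cleaning and validation of each chunk happened between batches)
        history.extend(chunk)
    return history
```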
Next, I downloaded the prompt backup from this site and used the prompts as seeds for some storywriting data:
https://aetherroom.club/whats-new#backup-update
I went over it twice with Command-R-Plus: the first pass wrote the initial draft of the output, and the second improved it and extended its length.
Also included were subsets of the following datasets:
anthracite-org/stheno-filtered-v1.1
anthracite-org/kalo_misc_part2
anthracite-org/kalo_opus_misc_240827
anthracite-org/kalo-opus-instruct-22k-no-refusal
Chaser-cz/sonnet35-charcard-roleplay-sharegpt
(A very small subset) jondurbin/airoboros-3.2
And various other data, viewable at openerotica/mixed-rp
Every line of data was run through a large model in order to filter for low quality, repetition, and underage content.
There is a LOT more I can do to improve the dataset used to create this model. As of now, the storywriting data is all single-shot, whereas I'd like to create some examples of "continue the story in x direction..." and "rewrite it adding/removing these elements". More stable diffusion roleplay prompting data could also be helpful. Roleplay conversations should be extended to 32k and beyond.
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
```yaml
base_model: mistralai/Mistral-Nemo-Base-2407
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: openerotica/mixed-rp
type: sharegpt
conversation: chatml
chat_template: chatml
adapter: qlora
lora_r: 128
lora_alpha: 256
lora_modules_to_save: [embed_tokens, lm_head]
lora_dropout: 0.05
lora_target_linear: true
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
dataset_prepared_path:
val_set_size: 0.01
output_dir: /workspace/axolotl/mixed-rp-mistral-nemo
sequence_len: 20000
sample_packing: true
pad_to_sequence_len: true
wandb_project: mistral-2
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 1
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 1e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
save_total_limit: 2
save_steps:
debug:
deepspeed:
weight_decay: 0.1
special_tokens:
eos_token: "<|im_end|>"
pad_token: "<pad>"
bos_token: "<s>"
unk_token: "<unk>"
tokens:
- "<|im_start|>"
# fsdp:
# - full_shard
# - auto_wrap
# fsdp_config:
# fsdp_limit_all_gathers: true
# fsdp_sync_module_states: true
# fsdp_offload_params: true
# fsdp_use_orig_params: false
# fsdp_cpu_ram_efficient_loading: true
# fsdp_transformer_layer_cls_to_wrap: MixtralSparseMoeBlock
# fsdp_state_dict_type: FULL_STATE_DICT
# fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
# fsdp_sharding_strategy: FULL_SHARD
# fsdp_forward_prefetch: false
# fsdp_backward_prefetch: BACKWARD_PRE
|
nonetrix/llama-3.1-70B-lumitron-lorablated | nonetrix | 2024-10-21T07:33:40Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:mlabonne/Llama-3-70B-Instruct-abliterated-LORA",
"base_model:merge:mlabonne/Llama-3-70B-Instruct-abliterated-LORA",
"base_model:nonetrix/llama-3.1-70B-lumitron",
"base_model:merge:nonetrix/llama-3.1-70B-lumitron",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-21T06:05:19Z | ---
base_model:
- nonetrix/llama-3.1-70B-lumitron
- mlabonne/Llama-3-70B-Instruct-abliterated-LORA
library_name: transformers
tags:
- mergekit
- merge
---
# llama-3.1-70B-lumitron-lorablated
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged with the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, using [nonetrix/llama-3.1-70B-lumitron](https://huggingface.co/nonetrix/llama-3.1-70B-lumitron) + [mlabonne/Llama-3-70B-Instruct-abliterated-LORA](https://huggingface.co/mlabonne/Llama-3-70B-Instruct-abliterated-LORA) as the base.
### Models Merged
The following models were included in the merge:
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: nonetrix/llama-3.1-70B-lumitron+mlabonne/Llama-3-70B-Instruct-abliterated-LORA
dtype: bfloat16
merge_method: task_arithmetic
parameters:
normalize: false
slices:
- sources:
- layer_range: [0, 80]
model: nonetrix/llama-3.1-70B-lumitron+mlabonne/Llama-3-70B-Instruct-abliterated-LORA
parameters:
weight: 1.0
```
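For intuition, task arithmetic adds a scaled task vector (fine-tuned weights minus base weights) onto a base model; with `normalize: false` and `weight: 1.0` as above, that reduces to roughly the following sketch (illustrative only, not mergekit's actual implementation):
```python
def task_arithmetic_merge(base_state, tuned_state, weight=1.0):
    """theta_new = theta_base + weight * (theta_tuned - theta_base)."""
    return {
        name: base_state[name] + weight * (tuned_state[name] - base_state[name])
        for name in base_state
    }
```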
|
mradermacher/Mahou-1.5-mistral-nemo-12B-lorablated-GGUF | mradermacher | 2024-10-21T07:33:21Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated",
"base_model:quantized:nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-19T09:17:33Z | ---
base_model: nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mahou-1.5-mistral-nemo-12B-lorablated-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
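For multi-part files specifically, the parts are typically joined by plain byte-level concatenation before loading; in Python that might look like this (the part names below are hypothetical; check the repo listing for the real ones):
```python
import shutil

parts = ["model.gguf.part1of2", "model.gguf.part2of2"]  # hypothetical names
with open("model.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, merged)
```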
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.5-mistral-nemo-12B-lorablated-GGUF/resolve/main/Mahou-1.5-mistral-nemo-12B-lorablated.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.5-mistral-nemo-12B-lorablated-GGUF/resolve/main/Mahou-1.5-mistral-nemo-12B-lorablated.Q3_K_S.gguf) | Q3_K_S | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.5-mistral-nemo-12B-lorablated-GGUF/resolve/main/Mahou-1.5-mistral-nemo-12B-lorablated.Q3_K_M.gguf) | Q3_K_M | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.5-mistral-nemo-12B-lorablated-GGUF/resolve/main/Mahou-1.5-mistral-nemo-12B-lorablated.Q3_K_L.gguf) | Q3_K_L | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.5-mistral-nemo-12B-lorablated-GGUF/resolve/main/Mahou-1.5-mistral-nemo-12B-lorablated.IQ4_XS.gguf) | IQ4_XS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.5-mistral-nemo-12B-lorablated-GGUF/resolve/main/Mahou-1.5-mistral-nemo-12B-lorablated.Q4_K_S.gguf) | Q4_K_S | 6.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.5-mistral-nemo-12B-lorablated-GGUF/resolve/main/Mahou-1.5-mistral-nemo-12B-lorablated.Q4_K_M.gguf) | Q4_K_M | 6.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.5-mistral-nemo-12B-lorablated-GGUF/resolve/main/Mahou-1.5-mistral-nemo-12B-lorablated.Q5_K_S.gguf) | Q5_K_S | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.5-mistral-nemo-12B-lorablated-GGUF/resolve/main/Mahou-1.5-mistral-nemo-12B-lorablated.Q5_K_M.gguf) | Q5_K_M | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.5-mistral-nemo-12B-lorablated-GGUF/resolve/main/Mahou-1.5-mistral-nemo-12B-lorablated.Q6_K.gguf) | Q6_K | 8.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.5-mistral-nemo-12B-lorablated-GGUF/resolve/main/Mahou-1.5-mistral-nemo-12B-lorablated.Q8_0.gguf) | Q8_0 | 10.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.5-mistral-nemo-12B-lorablated-GGUF/resolve/main/Mahou-1.5-mistral-nemo-12B-lorablated.f16.gguf) | f16 | 20.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
byroneverson/LongWriter-glm4-9b-abliterated | byroneverson | 2024-10-21T07:31:10Z | 22 | 7 | transformers | [
"transformers",
"safetensors",
"chatglm",
"feature-extraction",
"llm",
"glm",
"glm4",
"llama",
"chat",
"instruct",
"it",
"abliterated",
"longwriter",
"long context",
"text-generation",
"conversational",
"custom_code",
"en",
"base_model:THUDM/LongWriter-glm4-9b",
"base_model:finetune:THUDM/LongWriter-glm4-9b",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-10-17T11:26:58Z | ---
base_model: THUDM/LongWriter-glm4-9b
license: apache-2.0
pipeline_tag: text-generation
language:
- en
tags:
- llm
- glm
- glm4
- chatglm
- llama
- chat
- instruct
- it
- abliterated
- longwriter
- long context
library_name: transformers
---
# LongWriter-glm4-9b-abliterated
## Now accepting abliteration requests. If you would like to see a model abliterated, follow me and leave me a message with a model link.
Check out the <a href="https://huggingface.co/byroneverson/LongWriter-glm4-9b-abliterated/blob/main/abliterate-LongWriter-glm4-9b.ipynb">jupyter notebook</a> for details of how this model was abliterated.

|
daisd-ai/anydef-orpo | daisd-ai | 2024-10-21T07:30:19Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"orpo",
"generated_from_trainer",
"entity linking",
"conversational",
"dataset:daisd-ai/anydef-kilt-tasks",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-19T18:13:54Z | ---
base_model: mistralai/Mistral-7B-v0.1
tags:
- alignment-handbook
- trl
- orpo
- generated_from_trainer
- entity linking
datasets:
- daisd-ai/anydef-kilt-tasks
model-index:
- name: anydef-orpo
results: []
license: apache-2.0
inference: false
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# anydef-orpo
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the daisd-ai/anydef-kilt-tasks dataset.
See the model description, intended uses & limitations, and training and evaluation data on our [GitHub](https://github.com/daisd-ai/universal-el).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `ORPOTrainer` sketch follows the list):
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
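Illustratively, those hyperparameters map onto TRL's ORPO trainer roughly as follows (a sketch under assumptions: the split name and dataset formatting are not specified in this card, and the actual alignment-handbook recipe differs in detail):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Split name assumed; ORPO expects prompt/chosen/rejected-style preference pairs.
dataset = load_dataset("daisd-ai/anydef-kilt-tasks", split="train")

config = ORPOConfig(
    output_dir="anydef-orpo",
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    lr_scheduler_type="inverse_sqrt",
    warmup_steps=100,
    num_train_epochs=3,
    seed=42,
)
trainer = ORPOTrainer(model=model, args=config, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```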
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf | RichardErkhov | 2024-10-21T07:25:21Z | 14 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T06:55:32Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q2_K.gguf) | Q2_K | 1.39GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.IQ3_XS.gguf) | IQ3_XS | 1.53GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.IQ3_S.gguf) | IQ3_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q3_K_S.gguf) | Q3_K_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q3_K.gguf) | Q3_K | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q3_K_M.gguf) | Q3_K_M | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q3_K_L.gguf) | Q3_K_L | 1.85GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.IQ4_XS.gguf) | IQ4_XS | 1.91GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q4_0.gguf) | Q4_0 | 1.99GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.IQ4_NL.gguf) | IQ4_NL | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q4_K_S.gguf) | Q4_K_S | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q4_K.gguf) | Q4_K | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q4_K_M.gguf) | Q4_K_M | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q4_1.gguf) | Q4_1 | 2.18GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q5_0.gguf) | Q5_0 | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q5_K_S.gguf) | Q5_K_S | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q5_K.gguf) | Q5_K | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q5_K_M.gguf) | Q5_K_M | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q5_1.gguf) | Q5_1 | 2.55GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q6_K.gguf) | Q6_K | 2.76GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3.Q8_0.gguf) | Q8_0 | 3.58GB |
Original model description:
---
base_model: RyanYr/self-correct_Llama-3.2-3B-Instruct_MATH_0.25-0.5_bon_iter2
library_name: transformers
model_name: self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3
This model is a fine-tuned version of [RyanYr/self-correct_Llama-3.2-3B-Instruct_MATH_0.25-0.5_bon_iter2](https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_MATH_0.25-0.5_bon_iter2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/0fvo71ux)
This model was trained with SFT.
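In outline, SFT with TRL looks like this (a generic sketch; the dataset below is a placeholder, since the card does not name the SFT data):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="RyanYr/self-correct_Llama-3.2-3B-Instruct_MATH_0.25-0.5_bon_iter2",
    train_dataset=dataset,
    args=SFTConfig(output_dir="self-correct_MATH_0.5-0.75_bon_iter3"),
)
trainer.train()
```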
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.4.0
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf | RichardErkhov | 2024-10-21T07:24:17Z | 69 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T06:56:25Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
L3.2-JametMini-3B-MK.I - GGUF
- Model creator: https://huggingface.co/Hastagaras/
- Original model: https://huggingface.co/Hastagaras/L3.2-JametMini-3B-MK.I/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [L3.2-JametMini-3B-MK.I.Q2_K.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q2_K.gguf) | Q2_K | 1.27GB |
| [L3.2-JametMini-3B-MK.I.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [L3.2-JametMini-3B-MK.I.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [L3.2-JametMini-3B-MK.I.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [L3.2-JametMini-3B-MK.I.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [L3.2-JametMini-3B-MK.I.Q3_K.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q3_K.gguf) | Q3_K | 1.57GB |
| [L3.2-JametMini-3B-MK.I.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [L3.2-JametMini-3B-MK.I.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [L3.2-JametMini-3B-MK.I.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [L3.2-JametMini-3B-MK.I.Q4_0.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q4_0.gguf) | Q4_0 | 1.79GB |
| [L3.2-JametMini-3B-MK.I.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [L3.2-JametMini-3B-MK.I.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [L3.2-JametMini-3B-MK.I.Q4_K.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q4_K.gguf) | Q4_K | 1.88GB |
| [L3.2-JametMini-3B-MK.I.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [L3.2-JametMini-3B-MK.I.Q4_1.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q4_1.gguf) | Q4_1 | 1.95GB |
| [L3.2-JametMini-3B-MK.I.Q5_0.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q5_0.gguf) | Q5_0 | 2.11GB |
| [L3.2-JametMini-3B-MK.I.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [L3.2-JametMini-3B-MK.I.Q5_K.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q5_K.gguf) | Q5_K | 2.16GB |
| [L3.2-JametMini-3B-MK.I.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [L3.2-JametMini-3B-MK.I.Q5_1.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q5_1.gguf) | Q5_1 | 2.28GB |
| [L3.2-JametMini-3B-MK.I.Q6_K.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q6_K.gguf) | Q6_K | 2.46GB |
| [L3.2-JametMini-3B-MK.I.Q8_0.gguf](https://huggingface.co/RichardErkhov/Hastagaras_-_L3.2-JametMini-3B-MK.I-gguf/blob/main/L3.2-JametMini-3B-MK.I.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
license: llama3.2
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---
Jamet, but smol
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf | RichardErkhov | 2024-10-21T07:23:59Z | 91 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T06:52:38Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q2_K.gguf) | Q2_K | 1.39GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.IQ3_XS.gguf) | IQ3_XS | 1.53GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.IQ3_S.gguf) | IQ3_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q3_K_S.gguf) | Q3_K_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q3_K.gguf) | Q3_K | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q3_K_M.gguf) | Q3_K_M | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q3_K_L.gguf) | Q3_K_L | 1.85GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.IQ4_XS.gguf) | IQ4_XS | 1.91GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q4_0.gguf) | Q4_0 | 1.99GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.IQ4_NL.gguf) | IQ4_NL | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q4_K_S.gguf) | Q4_K_S | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q4_K.gguf) | Q4_K | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q4_K_M.gguf) | Q4_K_M | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q4_1.gguf) | Q4_1 | 2.18GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q5_0.gguf) | Q5_0 | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q5_K_S.gguf) | Q5_K_S | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q5_K.gguf) | Q5_K | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q5_K_M.gguf) | Q5_K_M | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q5_1.gguf) | Q5_1 | 2.55GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q6_K.gguf) | Q6_K | 2.76GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q8_0.gguf) | Q8_0 | 3.58GB |
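As a usage sketch (not part of the original card), any of the files above can be fetched and run with llama-cpp-python; the repo id and filename below are copied from the Q4_K_M row of the table, and everything else is an assumed minimal setup:

```python
# Minimal sketch: download one quant and run it with llama-cpp-python
# (assumes `pip install llama-cpp-python huggingface_hub`; repo id and
# filename are taken from the Q4_K_M row of the table above).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4-gguf",
    filename="self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)  # load the quantized model
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 7 * 8?"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```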
Original model description:
---
base_model: RyanYr/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3
library_name: transformers
model_name: self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4
This model is a fine-tuned version of [RyanYr/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3](https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_MATH_0.5-0.75_bon_iter3).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/self-correct_Llama-3.2-3B-Instruct_MATH_0.75-1_bon_iter4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/alko0rmp)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.4.0
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
vocabtrimmer/chinese-roberta-wwm-ext.xnli-zh.5 | vocabtrimmer | 2024-10-21T07:15:56Z | 6 | 0 | null | [
"safetensors",
"bert",
"region:us"
] | null | 2024-10-21T07:15:37Z | # `vocabtrimmer/chinese-roberta-wwm-ext.xnli-zh.5`
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on the [xnli](https://huggingface.co/datasets/xnli) dataset (zh).
The following metrics are computed on the `test` and `validation` splits of [xnli](https://huggingface.co/datasets/xnli) (zh).
* Evaluation on test split
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 77.37 | 77.37 | 77.37 | 77.32 | 77.37 | 78.06 | 77.37 |
* Evaluation on validation split
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 78.71 | 78.71 | 78.71 | 78.71 | 78.71 | 79.34 | 78.71 |
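For reference, a minimal inference sketch (not part of the original card), assuming the checkpoint exposes a standard sequence-classification head for premise/hypothesis pairs; the label names come from the model's own config:

```python
# Minimal inference sketch for this XNLI (zh) classifier, assuming a standard
# sequence-classification head over premise/hypothesis pairs.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="vocabtrimmer/chinese-roberta-wwm-ext.xnli-zh.5",
)

# Premise: "He drove to the stadium." / Hypothesis: "He stayed at home."
result = classifier({"text": "他开车去了体育场。", "text_pair": "他待在家里。"})
print(result)  # e.g. [{'label': ..., 'score': ...}] — exact labels depend on the config
```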
Check the result file [here](https://huggingface.co/vocabtrimmer/chinese-roberta-wwm-ext.xnli-zh.5/raw/main/eval.json). |
RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf | RichardErkhov | 2024-10-21T07:14:43Z | 15 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T06:47:07Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6 - GGUF
- Model creator: https://huggingface.co/SongTonyLi/
- Original model: https://huggingface.co/SongTonyLi/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q2_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q2_K.gguf) | Q2_K | 1.27GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q3_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q3_K.gguf) | Q3_K | 1.57GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q4_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q4_0.gguf) | Q4_0 | 1.79GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q4_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q4_K.gguf) | Q4_K | 1.88GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q4_1.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q4_1.gguf) | Q4_1 | 1.95GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q5_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q5_0.gguf) | Q5_0 | 2.11GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q5_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q5_K.gguf) | Q5_K | 2.16GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q5_1.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q5_1.gguf) | Q5_1 | 2.28GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q6_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q6_K.gguf) | Q6_K | 2.46GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q8_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix6.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
FareedKhan/all_datasets_v4_MiniLM-L6_q4 | FareedKhan | 2024-10-21T07:12:01Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | sentence-similarity | 2024-10-21T07:11:58Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("FareedKhan/all_datasets_v4_MiniLM-L6_q4")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.0
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets:
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Shakker-Labs/FLUX.1-dev-LoRA-Text-Poster | Shakker-Labs | 2024-10-21T07:05:38Z | 307 | 32 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"image-generation",
"flux",
"safetensors",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-21T06:54:33Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- image-generation
- flux
- safetensors
widget:
- text: Text poster, a couple
output:
url: images/6dd1a918d89991ad5e40513ab88e7d892077f89dac93edcf4b660dd2.jpg
- text: Text poster, a woman sitting in a cafe
output:
url: images/d2586464001008a80b5e45104e0f23290a35db048cab2e4fc4bfa356.jpg
- text: Text poster, eiffel tower
output:
url: images/f25e24ecfbd0aa96fb6f55ab29288ba4d1fffe79fd95679d9d2f1329.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: text poster
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# FLUX.1-dev-LoRA-Text-Poster
This is a LoRA (Text Poster) trained on FLUX.1-dev for artistic text posters by [cooooool](https://www.shakker.ai/userpage/c4d790d27e6b4de69f3f3508daf8f4c5/publish). If you are also interested in sharing your models on our platform, you are welcome to join our [Discord Community](https://discord.gg/5TuxSjJya6).
<div class="container">
<img src="./poster.jpeg" width="1024"/>
</div>
## Showcases
<Gallery />
## Trigger words
You should use `text poster` to trigger the image generation. The recommended scale is `0.8` to `1.0` in diffusers.
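The recommended scale can be applied when loading the LoRA in diffusers. Below is a minimal sketch (not from the original card): it assumes a recent diffusers release with Flux support, that `load_lora_weights` resolves the LoRA file in this repo automatically, and that the step count and guidance value are typical defaults rather than tuned settings.

```python
# Minimal diffusers sketch: load FLUX.1-dev, attach this LoRA, apply the
# recommended scale, and generate with the `text poster` trigger phrase.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Shakker-Labs/FLUX.1-dev-LoRA-Text-Poster")
pipe.fuse_lora(lora_scale=0.8)  # recommended range is 0.8 to 1.0

image = pipe(
    "text poster, a couple",       # `text poster` is the trigger phrase
    num_inference_steps=28,        # typical Flux defaults, not tuned values
    guidance_scale=3.5,
).images[0]
image.save("text_poster.png")
```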
## Online Inference
You can also download this model at [Shakker AI](https://www.shakker.ai/modelinfo/579ab130b53246fea49811bf80d38486/FLUX-text-poster?from=search), where we provide an online interface to generate images.
## Acknowledgements
This model was trained by our user [cooooool](https://www.shakker.ai/userpage/c4d790d27e6b4de69f3f3508daf8f4c5/publish), who retains its copyright, and is released with the creator's permission. The model follows the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf | RichardErkhov | 2024-10-21T07:04:51Z | 30 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T06:37:18Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4 - GGUF
- Model creator: https://huggingface.co/SongTonyLi/
- Original model: https://huggingface.co/SongTonyLi/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q2_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q2_K.gguf) | Q2_K | 1.27GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q3_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q3_K.gguf) | Q3_K | 1.57GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q4_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q4_0.gguf) | Q4_0 | 1.79GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q4_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q4_K.gguf) | Q4_K | 1.88GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q4_1.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q4_1.gguf) | Q4_1 | 1.95GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q5_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q5_0.gguf) | Q5_0 | 2.11GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q5_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q5_K.gguf) | Q5_K | 2.16GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q5_1.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q5_1.gguf) | Q5_1 | 2.28GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q6_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q6_K.gguf) | Q6_K | 2.46GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q8_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix4.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf | RichardErkhov | 2024-10-21T07:04:18Z | 15 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T06:37:18Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5 - GGUF
- Model creator: https://huggingface.co/SongTonyLi/
- Original model: https://huggingface.co/SongTonyLi/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q2_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q2_K.gguf) | Q2_K | 1.27GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q3_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q3_K.gguf) | Q3_K | 1.57GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q4_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q4_0.gguf) | Q4_0 | 1.79GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q4_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q4_K.gguf) | Q4_K | 1.88GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q4_1.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q4_1.gguf) | Q4_1 | 1.95GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q5_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q5_0.gguf) | Q5_0 | 2.11GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q5_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q5_K.gguf) | Q5_K | 2.16GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q5_1.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q5_1.gguf) | Q5_1 | 2.28GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q6_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q6_K.gguf) | Q6_K | 2.46GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q8_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix5.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf | RichardErkhov | 2024-10-21T07:04:11Z | 14 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T06:32:29Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q2_K.gguf) | Q2_K | 1.39GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.IQ3_XS.gguf) | IQ3_XS | 1.53GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.IQ3_S.gguf) | IQ3_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q3_K_S.gguf) | Q3_K_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q3_K.gguf) | Q3_K | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q3_K_M.gguf) | Q3_K_M | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q3_K_L.gguf) | Q3_K_L | 1.85GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.IQ4_XS.gguf) | IQ4_XS | 1.91GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q4_0.gguf) | Q4_0 | 1.99GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.IQ4_NL.gguf) | IQ4_NL | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q4_K_S.gguf) | Q4_K_S | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q4_K.gguf) | Q4_K | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q4_K_M.gguf) | Q4_K_M | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q4_1.gguf) | Q4_1 | 2.18GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q5_0.gguf) | Q5_0 | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q5_K_S.gguf) | Q5_K_S | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q5_K.gguf) | Q5_K | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q5_K_M.gguf) | Q5_K_M | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q5_1.gguf) | Q5_1 | 2.55GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q6_K.gguf) | Q6_K | 2.76GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2.Q8_0.gguf) | Q8_0 | 3.58GB |
Original model description:
---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
model_name: self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/self-correct_Llama-3.2-3B-Instruct_MATH_bon_iter2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/mb2zurot)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.4.0
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
FareedKhan/all_mini_me | FareedKhan | 2024-10-21T07:04:08Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | sentence-similarity | 2024-10-21T07:03:56Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("FareedKhan/all_mini_me")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.0
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets:
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
saad7489/segformer-b2-finetuned-segments-sidewalks-23 | saad7489 | 2024-10-21T07:01:00Z | 32 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-10-21T06:43:30Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: segformer-b2-finetuned-segments-sidewalks-23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b2-finetuned-segments-sidewalks-23
This model was trained from scratch on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
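A hedged sketch of how the hyperparameters above might map onto `transformers.TrainingArguments` (the output directory and any setting not listed are assumptions, not the author's actual training script):
```python
# Illustrative only: reconstructing the listed hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="segformer-b2-finetuned-segments-sidewalks-23",  # assumed
    learning_rate=3e-05,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    adam_beta1=0.9,        # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,    # and epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```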
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
VTSNLP/base_LLM_RAW_400M_final | VTSNLP | 2024-10-21T06:54:42Z | 5 | 0 | null | [
"tensorboard",
"safetensors",
"mpt",
"generated_from_trainer",
"region:us"
] | null | 2024-10-21T02:08:52Z | ---
tags:
- generated_from_trainer
model-index:
- name: base_LLM_RAW_400M_final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base_LLM_RAW_400M_final
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.6e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
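Since the card documents no usage snippet, here is a minimal, assumed inference sketch for this MPT-architecture checkpoint (the prompt is illustrative):
```python
# Assumed usage sketch, not from the original card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VTSNLP/base_LLM_RAW_400M_final"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Xin chào, ", return_tensors="pt")  # illustrative prompt
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```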
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
VTSNLP/base_LLM_CAC_400M_final | VTSNLP | 2024-10-21T06:53:49Z | 6 | 0 | null | [
"tensorboard",
"safetensors",
"mpt",
"generated_from_trainer",
"base_model:wanderer2k1/base_LLM_433MB_concat_and_chunk",
"base_model:finetune:wanderer2k1/base_LLM_433MB_concat_and_chunk",
"region:us"
] | null | 2024-10-21T02:04:18Z | ---
base_model: wanderer2k1/base_LLM_433MB_concat_and_chunk
tags:
- generated_from_trainer
model-index:
- name: base_LLM_CAC_400M_final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base_LLM_CAC_400M_final
This model is a fine-tuned version of [wanderer2k1/base_LLM_433MB_concat_and_chunk](https://huggingface.co/wanderer2k1/base_LLM_433MB_concat_and_chunk) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.6e-05
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
VTSNLP/base_LLM_CAC_200M_final | VTSNLP | 2024-10-21T06:52:42Z | 9 | 1 | null | [
"tensorboard",
"safetensors",
"mpt",
"generated_from_trainer",
"base_model:wanderer2k1/base_LLM_183MB_concat_and_chunk",
"base_model:finetune:wanderer2k1/base_LLM_183MB_concat_and_chunk",
"region:us"
] | null | 2024-10-21T02:01:38Z | ---
base_model: wanderer2k1/base_LLM_183MB_concat_and_chunk
tags:
- generated_from_trainer
model-index:
- name: base_LLM_CAC_200M_final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base_LLM_CAC_200M_final
This model is a fine-tuned version of [wanderer2k1/base_LLM_183MB_concat_and_chunk](https://huggingface.co/wanderer2k1/base_LLM_183MB_concat_and_chunk) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.6e-05
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf | RichardErkhov | 2024-10-21T06:49:39Z | 108 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T06:21:33Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q2_K.gguf) | Q2_K | 1.39GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.IQ3_XS.gguf) | IQ3_XS | 1.53GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.IQ3_S.gguf) | IQ3_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q3_K_S.gguf) | Q3_K_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q3_K.gguf) | Q3_K | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q3_K_M.gguf) | Q3_K_M | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q3_K_L.gguf) | Q3_K_L | 1.85GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.IQ4_XS.gguf) | IQ4_XS | 1.91GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q4_0.gguf) | Q4_0 | 1.99GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.IQ4_NL.gguf) | IQ4_NL | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q4_K_S.gguf) | Q4_K_S | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q4_K.gguf) | Q4_K | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q4_K_M.gguf) | Q4_K_M | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q4_1.gguf) | Q4_1 | 2.18GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q5_0.gguf) | Q5_0 | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q5_K_S.gguf) | Q5_K_S | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q5_K.gguf) | Q5_K | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q5_K_M.gguf) | Q5_K_M | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q5_1.gguf) | Q5_1 | 2.55GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q6_K.gguf) | Q6_K | 2.76GB |
| [self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q8_0.gguf) | Q8_0 | 3.58GB |
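As a hedged usage sketch (not part of the original card), any of the files above can be run locally with `llama-cpp-python`; the filename below is just one example quant:
```python
# Assumed usage: requires `pip install llama-cpp-python huggingface-hub`.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1-gguf",
    filename="self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1.Q4_K_M.gguf",
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 12 * 7?"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```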
Original model description:
---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
model_name: self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/self-correct_Llama-3.2-3B-Instruct_MATH_0-0.25_bon_iter1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/r32vk2yv)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.4.0
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |
abhinand/MedEmbed-base-v0.1 | abhinand | 2024-10-21T06:49:35Z | 1,088 | 13 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"medembed",
"medical-embedding",
"clinical-embedding",
"information-retrieval",
"en",
"dataset:MedicalQARetrieval",
"dataset:NFCorpus",
"dataset:PublicHealthQA",
"dataset:TRECCOVID",
"dataset:ArguAna",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"region:us"
] | null | 2024-10-20T11:46:20Z | ---
language: en
tags:
- medembed
- medical-embedding
- clinical-embedding
- information-retrieval
- sentence-transformers
license: apache-2.0
datasets:
- MedicalQARetrieval
- NFCorpus
- PublicHealthQA
- TRECCOVID
- ArguAna
metrics:
- nDCG
- MAP
- Recall
- Precision
- MRR
base_model:
- BAAI/bge-base-en-v1.5
---
# MedEmbed: Specialized Embedding Model for Medical and Clinical Information Retrieval

## Model Description
MedEmbed is a family of embedding models fine-tuned specifically for medical and clinical data, designed to enhance performance in healthcare-related natural language processing (NLP) tasks, particularly information retrieval.
**GitHub Repo:** [https://github.com/abhinand5/MedEmbed](https://github.com/abhinand5/MedEmbed)
**Technical Blog Post:** [https://huggingface.co/blog/abhinand/medembed-finetuned-embedding-models-for-medical-ir](https://huggingface.co/blog/abhinand/medembed-finetuned-embedding-models-for-medical-ir)
## Intended Use
This model is intended for use in medical and clinical contexts to improve information retrieval, question answering, and semantic search tasks. It can be integrated into healthcare systems, research tools, and medical literature databases to enhance search capabilities and information access.
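A minimal retrieval sketch with `sentence-transformers` follows (assumed usage; the query and documents are illustrative, not from the training data):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("abhinand/MedEmbed-base-v0.1")

query = "What are first-line treatments for type 2 diabetes?"
docs = [
    "Metformin is commonly recommended as initial therapy for type 2 diabetes.",
    "Hypertension guidelines suggest lifestyle changes before medication.",
]

# Embed, then rank documents by cosine similarity to the query.
query_emb = model.encode(query)
doc_embs = model.encode(docs)
scores = model.similarity(query_emb, doc_embs)
print(scores)  # higher score = more relevant document
```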
## Training Data

The model was trained using a simple yet effective synthetic data generation pipeline:
1. Source: Clinical notes from PubMed Central (PMC)
2. Processing: [LLaMA 3.1 70B](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) model used to generate query-response pairs
3. Augmentation: Negative sampling for challenging examples
4. Format: Triplets (query, positive response, negative response) for contrastive learning
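As a hedged illustration of this setup (not the actual training script, and with a stand-in triplet rather than the PMC-derived data), such triplets plug directly into contrastive losses in `sentence-transformers`:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("BAAI/bge-base-en-v1.5")  # the base model

# One illustrative (query, positive, negative) triplet.
train_examples = [
    InputExample(texts=[
        "What causes iron-deficiency anemia?",                 # query
        "Chronic blood loss is a frequent cause of anemia.",   # positive
        "Vitamin D regulates calcium absorption in the gut.",  # negative
    ]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.TripletLoss(model)  # one possible contrastive objective

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```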
## Performance
MedEmbed consistently outperforms general-purpose embedding models across various medical NLP benchmarks:
- ArguAna
- MedicalQARetrieval
- NFCorpus
- PublicHealthQA
- TRECCOVID
Specific performance metrics (nDCG, MAP, Recall, Precision, MRR) are available in the full documentation.
## Limitations
While highly effective for medical and clinical data, this model may not generalize well to non-medical domains. It should be used with caution in general-purpose NLP tasks.
## Ethical Considerations
Users should be aware of potential biases in medical data and the ethical implications of AI in healthcare. This model should be used as a tool to assist, not replace, human expertise in medical decision-making.
## Citation
If you use this model in your research, please cite:
```bibtex
@software{balachandran2024medembed,
author = {Balachandran, Abhinand},
title = {MedEmbed: Medical-Focused Embedding Models},
year = {2024},
url = {https://github.com/abhinand5/MedEmbed}
}
```
For more detailed information, visit our GitHub repository. |
zemaia/e500_lr2e-05 | zemaia | 2024-10-21T06:49:25Z | 196 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:adalbertojunior/distilbert-portuguese-cased",
"base_model:finetune:adalbertojunior/distilbert-portuguese-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-10-17T17:49:14Z | ---
library_name: transformers
base_model: adalbertojunior/distilbert-portuguese-cased
tags:
- generated_from_trainer
model-index:
- name: e500_lr2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# e500_lr2e-05
This model is a fine-tuned version of [adalbertojunior/distilbert-portuguese-cased](https://huggingface.co/adalbertojunior/distilbert-portuguese-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 200
- eval_batch_size: 400
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500
- mixed_precision_training: Native AMP
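Since the card documents no usage snippet, here is a minimal, assumed inference sketch for this Portuguese fill-mask checkpoint (the sentence is illustrative):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="zemaia/e500_lr2e-05")
for pred in fill("O Brasil é um país da América do [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```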
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:--------:|:-----:|:---------------:|
| 6.7563 | 1.6949 | 100 | 5.4137 |
| 5.0553 | 3.3898 | 200 | 4.4824 |
| 4.3687 | 5.0847 | 300 | 3.9332 |
| 3.9319 | 6.7797 | 400 | 3.5644 |
| 3.6101 | 8.4746 | 500 | 3.2889 |
| 3.3843 | 10.1695 | 600 | 3.0760 |
| 3.1869 | 11.8644 | 700 | 2.9195 |
| 3.0395 | 13.5593 | 800 | 2.7842 |
| 2.9038 | 15.2542 | 900 | 2.6563 |
| 2.7768 | 16.9492 | 1000 | 2.5554 |
| 2.6835 | 18.6441 | 1100 | 2.4614 |
| 2.5903 | 20.3390 | 1200 | 2.3882 |
| 2.5214 | 22.0339 | 1300 | 2.3210 |
| 2.4401 | 23.7288 | 1400 | 2.2352 |
| 2.373 | 25.4237 | 1500 | 2.2145 |
| 2.3147 | 27.1186 | 1600 | 2.1609 |
| 2.2606 | 28.8136 | 1700 | 2.0704 |
| 2.2064 | 30.5085 | 1800 | 2.0260 |
| 2.1572 | 32.2034 | 1900 | 2.0259 |
| 2.1258 | 33.8983 | 2000 | 1.9498 |
| 2.0683 | 35.5932 | 2100 | 1.9212 |
| 2.0374 | 37.2881 | 2200 | 1.8884 |
| 1.9998 | 38.9831 | 2300 | 1.8543 |
| 1.9582 | 40.6780 | 2400 | 1.8106 |
| 1.932 | 42.3729 | 2500 | 1.7822 |
| 1.8862 | 44.0678 | 2600 | 1.7673 |
| 1.8677 | 45.7627 | 2700 | 1.7280 |
| 1.8375 | 47.4576 | 2800 | 1.7147 |
| 1.8128 | 49.1525 | 2900 | 1.6882 |
| 1.7874 | 50.8475 | 3000 | 1.6357 |
| 1.7628 | 52.5424 | 3100 | 1.6502 |
| 1.7391 | 54.2373 | 3200 | 1.6312 |
| 1.709 | 55.9322 | 3300 | 1.5989 |
| 1.6878 | 57.6271 | 3400 | 1.5503 |
| 1.6605 | 59.3220 | 3500 | 1.5602 |
| 1.6331 | 61.0169 | 3600 | 1.5486 |
| 1.6206 | 62.7119 | 3700 | 1.5046 |
| 1.6057 | 64.4068 | 3800 | 1.5098 |
| 1.5877 | 66.1017 | 3900 | 1.4885 |
| 1.5576 | 67.7966 | 4000 | 1.4747 |
| 1.5413 | 69.4915 | 4100 | 1.4500 |
| 1.5142 | 71.1864 | 4200 | 1.3917 |
| 1.4847 | 72.8814 | 4300 | 1.3771 |
| 1.4665 | 74.5763 | 4400 | 1.3737 |
| 1.4562 | 76.2712 | 4500 | 1.3560 |
| 1.4422 | 77.9661 | 4600 | 1.3394 |
| 1.4148 | 79.6610 | 4700 | 1.3453 |
| 1.4108 | 81.3559 | 4800 | 1.3261 |
| 1.3992 | 83.0508 | 4900 | 1.3111 |
| 1.3784 | 84.7458 | 5000 | 1.3083 |
| 1.3607 | 86.4407 | 5100 | 1.2982 |
| 1.352 | 88.1356 | 5200 | 1.2758 |
| 1.3353 | 89.8305 | 5300 | 1.2818 |
| 1.3173 | 91.5254 | 5400 | 1.2697 |
| 1.3085 | 93.2203 | 5500 | 1.2440 |
| 1.2955 | 94.9153 | 5600 | 1.2099 |
| 1.2933 | 96.6102 | 5700 | 1.2337 |
| 1.2757 | 98.3051 | 5800 | 1.2056 |
| 1.262 | 100.0 | 5900 | 1.1993 |
| 1.2509 | 101.6949 | 6000 | 1.1933 |
| 1.2418 | 103.3898 | 6100 | 1.1645 |
| 1.2275 | 105.0847 | 6200 | 1.1820 |
| 1.2219 | 106.7797 | 6300 | 1.1452 |
| 1.216 | 108.4746 | 6400 | 1.1709 |
| 1.1954 | 110.1695 | 6500 | 1.1386 |
| 1.1858 | 111.8644 | 6600 | 1.1336 |
| 1.1799 | 113.5593 | 6700 | 1.1217 |
| 1.1707 | 115.2542 | 6800 | 1.1102 |
| 1.1653 | 116.9492 | 6900 | 1.1093 |
| 1.1476 | 118.6441 | 7000 | 1.1032 |
| 1.1406 | 120.3390 | 7100 | 1.1004 |
| 1.1364 | 122.0339 | 7200 | 1.0698 |
| 1.1173 | 123.7288 | 7300 | 1.0817 |
| 1.1129 | 125.4237 | 7400 | 1.0825 |
| 1.1077 | 127.1186 | 7500 | 1.0728 |
| 1.0943 | 128.8136 | 7600 | 1.0496 |
| 1.0881 | 130.5085 | 7700 | 1.0443 |
| 1.0774 | 132.2034 | 7800 | 1.0392 |
| 1.0789 | 133.8983 | 7900 | 1.0470 |
| 1.0608 | 135.5932 | 8000 | 1.0248 |
| 1.0516 | 137.2881 | 8100 | 1.0144 |
| 1.0533 | 138.9831 | 8200 | 1.0246 |
| 1.0401 | 140.6780 | 8300 | 1.0180 |
| 1.0347 | 142.3729 | 8400 | 0.9903 |
| 1.0268 | 144.0678 | 8500 | 0.9809 |
| 1.016 | 145.7627 | 8600 | 0.9839 |
| 1.003 | 147.4576 | 8700 | 0.9870 |
| 1.0066 | 149.1525 | 8800 | 0.9610 |
| 1.004 | 150.8475 | 8900 | 0.9488 |
| 0.9918 | 152.5424 | 9000 | 0.9601 |
| 0.996 | 154.2373 | 9100 | 0.9660 |
| 0.9835 | 155.9322 | 9200 | 0.9376 |
| 0.9801 | 157.6271 | 9300 | 0.9504 |
| 0.9606 | 159.3220 | 9400 | 0.9482 |
| 0.9646 | 161.0169 | 9500 | 0.9312 |
| 0.9637 | 162.7119 | 9600 | 0.9304 |
| 0.9528 | 164.4068 | 9700 | 0.9270 |
| 0.9432 | 166.1017 | 9800 | 0.9205 |
| 0.9398 | 167.7966 | 9900 | 0.9202 |
| 0.9377 | 169.4915 | 10000 | 0.9167 |
| 0.9282 | 171.1864 | 10100 | 0.9122 |
| 0.9118 | 172.8814 | 10200 | 0.9034 |
| 0.907 | 174.5763 | 10300 | 0.8839 |
| 0.9152 | 176.2712 | 10400 | 0.8879 |
| 0.9124 | 177.9661 | 10500 | 0.8885 |
| 0.9005 | 179.6610 | 10600 | 0.8832 |
| 0.8979 | 181.3559 | 10700 | 0.8767 |
| 0.8836 | 183.0508 | 10800 | 0.8886 |
| 0.882 | 184.7458 | 10900 | 0.8601 |
| 0.8818 | 186.4407 | 11000 | 0.8713 |
| 0.8724 | 188.1356 | 11100 | 0.8602 |
| 0.8688 | 189.8305 | 11200 | 0.8510 |
| 0.8677 | 191.5254 | 11300 | 0.8401 |
| 0.8643 | 193.2203 | 11400 | 0.8453 |
| 0.8638 | 194.9153 | 11500 | 0.8351 |
| 0.8539 | 196.6102 | 11600 | 0.8460 |
| 0.852 | 198.3051 | 11700 | 0.8474 |
| 0.8433 | 200.0 | 11800 | 0.8249 |
| 0.8394 | 201.6949 | 11900 | 0.8326 |
| 0.8339 | 203.3898 | 12000 | 0.8331 |
| 0.8284 | 205.0847 | 12100 | 0.8216 |
| 0.8284 | 206.7797 | 12200 | 0.8148 |
| 0.8261 | 208.4746 | 12300 | 0.8020 |
| 0.8158 | 210.1695 | 12400 | 0.8112 |
| 0.8148 | 211.8644 | 12500 | 0.8154 |
| 0.8118 | 213.5593 | 12600 | 0.8058 |
| 0.8067 | 215.2542 | 12700 | 0.8005 |
| 0.8022 | 216.9492 | 12800 | 0.8021 |
| 0.793 | 218.6441 | 12900 | 0.8000 |
| 0.8003 | 220.3390 | 13000 | 0.7924 |
| 0.7891 | 222.0339 | 13100 | 0.7891 |
| 0.7802 | 223.7288 | 13200 | 0.7678 |
| 0.7906 | 225.4237 | 13300 | 0.7902 |
| 0.7756 | 227.1186 | 13400 | 0.7774 |
| 0.7788 | 228.8136 | 13500 | 0.7639 |
| 0.7654 | 230.5085 | 13600 | 0.7767 |
| 0.7686 | 232.2034 | 13700 | 0.7831 |
| 0.7691 | 233.8983 | 13800 | 0.7735 |
| 0.7656 | 235.5932 | 13900 | 0.7632 |
| 0.7597 | 237.2881 | 14000 | 0.7694 |
| 0.7562 | 238.9831 | 14100 | 0.7475 |
| 0.754 | 240.6780 | 14200 | 0.7585 |
| 0.7461 | 242.3729 | 14300 | 0.7502 |
| 0.749 | 244.0678 | 14400 | 0.7533 |
| 0.7482 | 245.7627 | 14500 | 0.7308 |
| 0.7436 | 247.4576 | 14600 | 0.7581 |
| 0.7395 | 249.1525 | 14700 | 0.7118 |
| 0.7339 | 250.8475 | 14800 | 0.7458 |
| 0.7337 | 252.5424 | 14900 | 0.7232 |
| 0.7262 | 254.2373 | 15000 | 0.7421 |
| 0.7313 | 255.9322 | 15100 | 0.7097 |
| 0.7223 | 257.6271 | 15200 | 0.7235 |
| 0.7189 | 259.3220 | 15300 | 0.7222 |
| 0.7228 | 261.0169 | 15400 | 0.7373 |
| 0.7163 | 262.7119 | 15500 | 0.7247 |
| 0.7102 | 264.4068 | 15600 | 0.7255 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf | RichardErkhov | 2024-10-21T06:47:01Z | 14 | 0 | null | [
"gguf",
"arxiv:2204.05149",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T06:16:07Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-3B-Instruct-chatml - GGUF
- Model creator: https://huggingface.co/IntervitensInc/
- Original model: https://huggingface.co/IntervitensInc/Llama-3.2-3B-Instruct-chatml/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.2-3B-Instruct-chatml.Q2_K.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q2_K.gguf) | Q2_K | 1.27GB |
| [Llama-3.2-3B-Instruct-chatml.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [Llama-3.2-3B-Instruct-chatml.IQ3_S.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [Llama-3.2-3B-Instruct-chatml.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [Llama-3.2-3B-Instruct-chatml.IQ3_M.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [Llama-3.2-3B-Instruct-chatml.Q3_K.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q3_K.gguf) | Q3_K | 1.57GB |
| [Llama-3.2-3B-Instruct-chatml.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [Llama-3.2-3B-Instruct-chatml.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [Llama-3.2-3B-Instruct-chatml.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [Llama-3.2-3B-Instruct-chatml.Q4_0.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q4_0.gguf) | Q4_0 | 1.79GB |
| [Llama-3.2-3B-Instruct-chatml.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [Llama-3.2-3B-Instruct-chatml.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [Llama-3.2-3B-Instruct-chatml.Q4_K.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q4_K.gguf) | Q4_K | 1.88GB |
| [Llama-3.2-3B-Instruct-chatml.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [Llama-3.2-3B-Instruct-chatml.Q4_1.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q4_1.gguf) | Q4_1 | 1.95GB |
| [Llama-3.2-3B-Instruct-chatml.Q5_0.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q5_0.gguf) | Q5_0 | 2.11GB |
| [Llama-3.2-3B-Instruct-chatml.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [Llama-3.2-3B-Instruct-chatml.Q5_K.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q5_K.gguf) | Q5_K | 2.16GB |
| [Llama-3.2-3B-Instruct-chatml.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [Llama-3.2-3B-Instruct-chatml.Q5_1.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q5_1.gguf) | Q5_1 | 2.28GB |
| [Llama-3.2-3B-Instruct-chatml.Q6_K.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q6_K.gguf) | Q6_K | 2.46GB |
| [Llama-3.2-3B-Instruct-chatml.Q8_0.gguf](https://huggingface.co/RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf/blob/main/Llama-3.2-3B-Instruct-chatml.Q8_0.gguf) | Q8_0 | 3.19GB |
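A hedged sketch (not from the original card) of fetching a single quant from this repo with `huggingface_hub` instead of downloading everything; the filename is one example from the table above:
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/IntervitensInc_-_Llama-3.2-3B-Instruct-chatml-gguf",
    filename="Llama-3.2-3B-Instruct-chatml.Q4_K_M.gguf",
)
print(path)  # local path to the cached GGUF file
```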
Original model description:
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: llama3.2
extra_gated_prompt: >-
### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT
Llama 3.2 Version Release Date: September 25, 2024
“Agreement” means the terms and conditions for use, reproduction, distribution
and modification of the Llama Materials set forth herein.
“Documentation” means the specifications, manuals and documentation accompanying Llama 3.2
distributed by Meta at https://llama.meta.com/doc/overview.
“Licensee” or “you” means you, or your employer or any other person or entity (if you are
entering into this Agreement on such person or entity’s behalf), of the age required under
applicable laws, rules or regulations to provide legal consent and that has legal authority
to bind your employer or such other person or entity if you are entering in this Agreement
on their behalf.
“Llama 3.2” means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://www.llama.com/llama-downloads.
“Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and
any portion thereof) made available under this Agreement.
“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or,
if you are an entity, your principal place of business is in the EEA or Switzerland)
and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials,
you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide,
non-transferable and royalty-free limited license under Meta’s intellectual property or other rights
owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works
of, and make modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works thereof),
or a product or service (including another AI model) that contains any of them, you shall (A) provide
a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama”
on a related website, user interface, blogpost, about page, or product documentation. If you use the
Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or
otherwise improve an AI model, which is distributed or made available, you shall also include “Llama”
at the beginning of any such AI model name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the
following attribution notice within a “Notice” text file distributed as a part of such copies:
“Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,
Inc. All Rights Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for
the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby
incorporated by reference into this Agreement.
2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates,
is greater than 700 million monthly active users in the preceding calendar month, you must request
a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to
exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS
ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES
OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE
FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED
WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN
IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials,
neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates,
except as required for reasonable and customary use in describing and redistributing the Llama Materials or as
set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required
to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible
at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark
will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any
derivative works and modifications of the Llama Materials that are made by you, as between you and Meta,
you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or
counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion
of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable
by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or
claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third
party arising out of or related to your use or distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access
to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms
and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this
Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of
California without regard to choice of law principles, and the UN Convention on Contracts for the International
Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of
any dispute arising out of this Agreement.
### Llama 3.2 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2.
If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”).
The most recent copy of this policy can be found at
[https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).
#### Prohibited Uses
We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law
5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:
8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997
9. Guns and illegal weapons (including weapon development)
10. Illegal drugs and regulated/controlled substances
11. Operation of critical infrastructure, transportation technologies, or heavy machinery
12. Self-harm or harm to others, including suicide, cutting, and eating disorders
13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:
14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
16. Generating, promoting, or further distributing spam
17. Impersonating another individual without consent, authorization, or legal right
18. Representing that the use of Llama 3.2 or outputs are human-generated
19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2
With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models.
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected]
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
The information you provide will be collected, stored, processed and shared in
accordance with the [Meta Privacy
Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model Developer:** Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks.
**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
## How to use
This repository contains two versions of Llama-3.2-3B-Instruct, for use with `transformers` and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "meta-llama/Llama-3.2-3B-Instruct"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
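The Auto-class path mentioned above looks like this; a minimal sketch in which the generation settings are illustrative:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```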
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)
### Use with `llama`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama).
To download the original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Llama-3.2-3B-Instruct --include "original/*" --local-dir Llama-3.2-3B-Instruct
```
## Hardware and Software
**Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | ----- | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
| Total | 830k | 86k | | 240 | 0 |
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).
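A hedged, illustrative sketch of the token-level logit-distillation idea described above (not Meta's training code; the temperature and loss form are common conventions, not disclosed settings):
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between teacher and student next-token
    distributions, softened by temperature T (shape: batch, seq, vocab)."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across T.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```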
**Data Freshness:** The pretraining data has a cutoff of December 2023.
## Benchmarks - English Text
In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.
### Base Pretrained Models
| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |
### Instruction Tuned Models
| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 63.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 40.1 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 19.0 | 17.2 |
| Instruction following | | IFEval | 0 | avg(prompt/instruction acc loose/strict) | 59.5 | 77.4 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 77.7 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 47.3 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 78.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 32.8 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 69.8 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 67.0 | 70.9 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 34.3 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | 19.8 | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | 63.3 | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | 84.7 | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 58.2 | 68.9 |
### Multilingual Benchmarks
| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro\_avg/acc) | Portuguese | 39.82 | 54.48 | 62.12 |
| | | Spanish | 41.5 | 55.1 | 62.5 |
| | | Italian | 39.8 | 53.8 | 61.6 |
| | | German | 39.2 | 53.3 | 60.6 |
| | | French | 40.5 | 54.6 | 62.3 |
| | | Hindi | 33.5 | 43.3 | 50.9 |
| | | Thai | 34.7 | 44.5 | 50.3 |
## Responsibility & Safety
As part of our Responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:
1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models
### Responsible Deployment
**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples of how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the power of the technology, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).
#### Llama 3.2 Instruct
**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).
**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.2 Systems
**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
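As one concrete illustration of this pattern, the hedged sketch below screens a user message with a Llama Guard checkpoint before it reaches the main model. Chat-template details and the verdict format differ slightly across Llama Guard versions, so treat the parsing here as an assumption to verify against the safeguard's own model card.

```python
# Hedged sketch: input moderation with Llama Guard before generation.
# Assumes the checkpoint's chat template renders a moderation prompt and
# that the verdict begins with "safe" or "unsafe" (per Llama Guard docs).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/Llama-Guard-3-1B"  # one of several safeguard sizes
tok = AutoTokenizer.from_pretrained(guard_id)
guard = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def is_safe(user_message: str) -> bool:
    chat = [{"role": "user", "content": user_message}]
    input_ids = tok.apply_chat_template(chat, return_tensors="pt").to(guard.device)
    out = guard.generate(input_ids, max_new_tokens=20, pad_token_id=tok.eos_token_id)
    verdict = tok.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True)
    return verdict.strip().lower().startswith("safe")
```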
### New Capabilities and Use Cases
**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.
**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM systems built on smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure that the safety of their system meets the requirements of their use case. We recommend lighter system safeguards for these deployments, such as Llama Guard 3-1B or its mobile-optimized version.
### Evaluations
**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case.
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, we performed uplift testing to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using chemical or biological weapons. We determined that this testing also applies to the smaller 1B and 3B models.
**2\. Child Safety:** Child safety risk assessments were conducted by a team of experts to assess the model’s capability to produce outputs that could result in child safety risks, and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model’s risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances and experiences.
**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.
### Community
**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
|
Niha14/speecht5_finetuned_techincal_data_V2 | Niha14 | 2024-10-21T06:44:11Z | 74 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-10-21T06:27:39Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_techincal_data_V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_techincal_data_V2
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
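For readers reproducing this run, the hyperparameters above map onto the `transformers` Trainer API roughly as follows. This is a sketch only: dataset preparation, the SpeechT5 data collator, and speaker embeddings are omitted.

```python
# Hedged sketch: the listed hyperparameters expressed as training arguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_techincal_data_V2",
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # effective train batch size: 4 * 8 = 32
    warmup_steps=100,
    max_steps=500,
    lr_scheduler_type="linear",
    fp16=True,                      # "Native AMP" mixed precision
    seed=42,
)
```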
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.5483 | 5.7143 | 100 | 0.5141 |
| 0.4836 | 11.4286 | 200 | 0.5088 |
| 0.4598 | 17.1429 | 300 | 0.5061 |
| 0.4364 | 22.8571 | 400 | 0.5245 |
| 0.4275 | 28.5714 | 500 | 0.5229 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
Ayush12a/llama3.1_finetuned_on_indian_legal_dataset | Ayush12a | 2024-10-21T06:31:32Z | 158 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-21T06:29:56Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** Ayush12a
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Ariffiq99/Stacked_model_60 | Ariffiq99 | 2024-10-21T06:30:42Z | 103 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-10-21T04:56:10Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Stacked_model_60
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Stacked_model_60
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8114
- F1: 0.7309
## Model description
More information needed
## Intended uses & limitations
More information needed
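Pending fuller documentation, the hedged sketch below shows the standard way to score answer candidates with an XLM-R multiple-choice head; the (question, choice) pairing is an assumption about the training format.

```python
# Hedged sketch: multiple-choice inference with this checkpoint.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_id = "Ariffiq99/Stacked_model_60"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

question = "Where does the sun rise?"          # placeholder example
choices = ["In the east.", "In the west."]
enc = tok([question] * len(choices), choices, return_tensors="pt", padding=True)
enc = {k: v.unsqueeze(0) for k, v in enc.items()}  # (batch=1, num_choices, seq)
with torch.no_grad():
    logits = model(**enc).logits               # shape: (1, num_choices)
print(choices[logits.argmax(-1).item()])
```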
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1179 | 1.0 | 946 | 0.8726 | 0.6862 |
| 0.794 | 2.0 | 1892 | 0.7980 | 0.7166 |
| 0.6979 | 3.0 | 2838 | 0.7882 | 0.7202 |
| 0.6325 | 4.0 | 3784 | 0.7913 | 0.7277 |
| 0.5896 | 5.0 | 4730 | 0.8114 | 0.7309 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf | RichardErkhov | 2024-10-21T06:30:04Z | 14 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T05:59:07Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama3.2-3B-Esper2 - GGUF
- Model creator: https://huggingface.co/ValiantLabs/
- Original model: https://huggingface.co/ValiantLabs/Llama3.2-3B-Esper2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama3.2-3B-Esper2.Q2_K.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q2_K.gguf) | Q2_K | 1.27GB |
| [Llama3.2-3B-Esper2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [Llama3.2-3B-Esper2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [Llama3.2-3B-Esper2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [Llama3.2-3B-Esper2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [Llama3.2-3B-Esper2.Q3_K.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q3_K.gguf) | Q3_K | 1.57GB |
| [Llama3.2-3B-Esper2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [Llama3.2-3B-Esper2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [Llama3.2-3B-Esper2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [Llama3.2-3B-Esper2.Q4_0.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q4_0.gguf) | Q4_0 | 1.79GB |
| [Llama3.2-3B-Esper2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [Llama3.2-3B-Esper2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [Llama3.2-3B-Esper2.Q4_K.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q4_K.gguf) | Q4_K | 1.88GB |
| [Llama3.2-3B-Esper2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [Llama3.2-3B-Esper2.Q4_1.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q4_1.gguf) | Q4_1 | 1.95GB |
| [Llama3.2-3B-Esper2.Q5_0.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q5_0.gguf) | Q5_0 | 2.11GB |
| [Llama3.2-3B-Esper2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [Llama3.2-3B-Esper2.Q5_K.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q5_K.gguf) | Q5_K | 2.16GB |
| [Llama3.2-3B-Esper2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [Llama3.2-3B-Esper2.Q5_1.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q5_1.gguf) | Q5_1 | 2.28GB |
| [Llama3.2-3B-Esper2.Q6_K.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q6_K.gguf) | Q6_K | 2.46GB |
| [Llama3.2-3B-Esper2.Q8_0.gguf](https://huggingface.co/RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf/blob/main/Llama3.2-3B-Esper2.Q8_0.gguf) | Q8_0 | 3.19GB |
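To use one of these files, download only the quant you need and load it with any GGUF-compatible runtime. The sketch below uses `huggingface_hub` plus `llama-cpp-python` as one possible combination.

```python
# Hedged sketch: fetch a single quant and run it with llama-cpp-python
# (pip install huggingface_hub llama-cpp-python).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/ValiantLabs_-_Llama3.2-3B-Esper2-gguf",
    filename="Llama3.2-3B-Esper2.Q4_K_M.gguf",  # pick any row from the table
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How do I shrink a Docker image?"}]
)
print(out["choices"][0]["message"]["content"])
```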
Original model description:
---
language:
- en
pipeline_tag: text-generation
tags:
- esper
- esper-2
- valiant
- valiant-labs
- llama
- llama-3.2
- llama-3.2-instruct
- llama-3.2-instruct-3b
- llama-3
- llama-3-instruct
- llama-3-instruct-3b
- 3b
- code
- code-instruct
- python
- dev-ops
- terraform
- azure
- aws
- gcp
- architect
- engineer
- developer
- conversational
- chat
- instruct
base_model: meta-llama/Llama-3.2-3B-Instruct
datasets:
- sequelbox/Titanium
- sequelbox/Tachibana
- sequelbox/Supernova
model-index:
- name: ValiantLabs/Llama3.2-3B-Esper2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-Shot)
type: Winogrande
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.27
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: ARC Challenge (25-Shot)
type: arc-challenge
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 43.17
name: normalized accuracy
model_type: llama
license: llama3.2
---

Esper 2 is a DevOps and cloud architecture code specialist built on Llama 3.2 3b.
- Expertise-driven, an AI assistant focused on AWS, Azure, GCP, Terraform, Dockerfiles, pipelines, shell scripts and more!
- Real world problem solving and high quality code instruct performance within the Llama 3.2 Instruct chat format
- Finetuned on synthetic [DevOps-instruct](https://huggingface.co/datasets/sequelbox/Titanium) and [code-instruct](https://huggingface.co/datasets/sequelbox/Tachibana) data generated with Llama 3.1 405b.
- Overall chat performance supplemented with [generalist chat data.](https://huggingface.co/datasets/sequelbox/Supernova)
Try our code-instruct AI assistant [Enigma!](https://huggingface.co/ValiantLabs/Llama3.1-8B-Enigma)
## Version
This is the **2024-10-03** release of Esper 2 for Llama 3.2 3b.
Esper 2 is also available for [Llama 3.1 8b!](https://huggingface.co/ValiantLabs/Llama3.1-8B-Esper2)
Esper 2 will be coming to more model sizes soon :)
## Prompting Guide
Esper 2 uses the [Llama 3.2 Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) prompt format. The example script below can be used as a starting point for general chat:
```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.2-3B-Esper2"

# Build a chat-capable text-generation pipeline in bfloat16,
# letting accelerate place the model across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Hi, how do I optimize the size of a Docker image?"}
]

# The pipeline applies the Llama 3.2 Instruct chat template automatically
# and returns the conversation with the assistant's reply appended.
outputs = pipeline(
    messages,
    max_new_tokens=2048,
)
print(outputs[0]["generated_text"][-1])  # the assistant's message
```
## The Model
Esper 2 is built on top of Llama 3.2 3b Instruct, improving performance through high quality DevOps, code, and chat data in Llama 3.2 Instruct prompt style.
Our current version of Esper 2 is trained on DevOps data from [sequelbox/Titanium](https://huggingface.co/datasets/sequelbox/Titanium), supplemented by code-instruct data from [sequelbox/Tachibana](https://huggingface.co/datasets/sequelbox/Tachibana) and general chat data from [sequelbox/Supernova.](https://huggingface.co/datasets/sequelbox/Supernova)

Esper 2 is created by [Valiant Labs.](http://valiantlabs.ca/)
[Check out our HuggingFace page for Shining Valiant 2, Enigma, and our other Build Tools models for creators!](https://huggingface.co/ValiantLabs)
[Follow us on X for updates on our models!](https://twitter.com/valiant_labs)
We care about open source.
For everyone to use.
We encourage others to finetune further from our models.
|
RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf | RichardErkhov | 2024-10-21T06:19:45Z | 31 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T05:50:13Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-3B-Instruct-abliterated - GGUF
- Model creator: https://huggingface.co/KidIkaros/
- Original model: https://huggingface.co/KidIkaros/Llama-3.2-3B-Instruct-abliterated/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.2-3B-Instruct-abliterated.Q2_K.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q2_K.gguf) | Q2_K | 1.39GB |
| [Llama-3.2-3B-Instruct-abliterated.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.IQ3_XS.gguf) | IQ3_XS | 1.53GB |
| [Llama-3.2-3B-Instruct-abliterated.IQ3_S.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.IQ3_S.gguf) | IQ3_S | 1.59GB |
| [Llama-3.2-3B-Instruct-abliterated.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q3_K_S.gguf) | Q3_K_S | 1.59GB |
| [Llama-3.2-3B-Instruct-abliterated.IQ3_M.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [Llama-3.2-3B-Instruct-abliterated.Q3_K.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q3_K.gguf) | Q3_K | 1.73GB |
| [Llama-3.2-3B-Instruct-abliterated.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q3_K_M.gguf) | Q3_K_M | 1.73GB |
| [Llama-3.2-3B-Instruct-abliterated.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q3_K_L.gguf) | Q3_K_L | 1.85GB |
| [Llama-3.2-3B-Instruct-abliterated.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.IQ4_XS.gguf) | IQ4_XS | 1.91GB |
| [Llama-3.2-3B-Instruct-abliterated.Q4_0.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q4_0.gguf) | Q4_0 | 1.99GB |
| [Llama-3.2-3B-Instruct-abliterated.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.IQ4_NL.gguf) | IQ4_NL | 2.0GB |
| [Llama-3.2-3B-Instruct-abliterated.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q4_K_S.gguf) | Q4_K_S | 2.0GB |
| [Llama-3.2-3B-Instruct-abliterated.Q4_K.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q4_K.gguf) | Q4_K | 2.09GB |
| [Llama-3.2-3B-Instruct-abliterated.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q4_K_M.gguf) | Q4_K_M | 2.09GB |
| [Llama-3.2-3B-Instruct-abliterated.Q4_1.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q4_1.gguf) | Q4_1 | 2.18GB |
| [Llama-3.2-3B-Instruct-abliterated.Q5_0.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q5_0.gguf) | Q5_0 | 2.37GB |
| [Llama-3.2-3B-Instruct-abliterated.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q5_K_S.gguf) | Q5_K_S | 2.37GB |
| [Llama-3.2-3B-Instruct-abliterated.Q5_K.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q5_K.gguf) | Q5_K | 2.41GB |
| [Llama-3.2-3B-Instruct-abliterated.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q5_K_M.gguf) | Q5_K_M | 2.41GB |
| [Llama-3.2-3B-Instruct-abliterated.Q5_1.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q5_1.gguf) | Q5_1 | 2.55GB |
| [Llama-3.2-3B-Instruct-abliterated.Q6_K.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q6_K.gguf) | Q6_K | 2.76GB |
| [Llama-3.2-3B-Instruct-abliterated.Q8_0.gguf](https://huggingface.co/RichardErkhov/KidIkaros_-_Llama-3.2-3B-Instruct-abliterated-gguf/blob/main/Llama-3.2-3B-Instruct-abliterated.Q8_0.gguf) | Q8_0 | 3.58GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf | RichardErkhov | 2024-10-21T06:16:25Z | 99 | 0 | null | [
"gguf",
"arxiv:2305.18290",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T05:47:00Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q2_K.gguf) | Q2_K | 1.39GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ3_XS.gguf) | IQ3_XS | 1.53GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ3_S.gguf) | IQ3_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q3_K_S.gguf) | Q3_K_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q3_K.gguf) | Q3_K | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q3_K_M.gguf) | Q3_K_M | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q3_K_L.gguf) | Q3_K_L | 1.85GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ4_XS.gguf) | IQ4_XS | 1.91GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_0.gguf) | Q4_0 | 1.99GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.IQ4_NL.gguf) | IQ4_NL | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_K_S.gguf) | Q4_K_S | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_K.gguf) | Q4_K | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_K_M.gguf) | Q4_K_M | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q4_1.gguf) | Q4_1 | 2.18GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_0.gguf) | Q5_0 | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_K_S.gguf) | Q5_K_S | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_K.gguf) | Q5_K | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_K_M.gguf) | Q5_K_M | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q5_1.gguf) | Q5_1 | 2.55GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q6_K.gguf) | Q6_K | 2.76GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2.Q8_0.gguf) | Q8_0 | 3.58GB |
Original model description:
---
base_model: RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter1
library_name: transformers
model_name: self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2
This model is a fine-tuned version of [RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter1](https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
# Chat-style generation: the pipeline applies the model's chat template
# to the messages list and returns only the newly generated text.
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/mkbbxyq2)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
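For reference, the DPO objective from that paper trains the policy directly on preference pairs $(x, y_w, y_l)$, where $y_w$ is the preferred and $y_l$ the dispreferred response, $\pi_{\mathrm{ref}}$ is the frozen reference policy (here, the iter1 checkpoint), and $\beta$ controls deviation from the reference:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\!\left[
    \log\sigma\!\left(
      \beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)}
      -\beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}
    \right)
  \right]
```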
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.4.0
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bmichaeljoiner/girlscard01 | bmichaeljoiner | 2024-10-21T06:14:01Z | 14 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-21T06:09:23Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: girlscard01
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# girlscard01
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `girlscard01` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
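A hedged `diffusers` sketch is shown below; the LoRA weight file inside the repo is auto-detected when a single safetensors file is present, otherwise pass `weight_name` explicitly.

```python
# Hedged sketch: apply this LoRA on top of FLUX.1-dev with diffusers.
# Requires substantial GPU memory and access to the gated base model.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("bmichaeljoiner/girlscard01")

image = pipe(
    "girlscard01, portrait photo",  # include the trigger word
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("girlscard01_sample.png")
```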
|
saad7489/segformer-b2-finetuned-segments-sidewalks-22 | saad7489 | 2024-10-21T06:08:53Z | 33 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-10-21T05:54:21Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: segformer-b2-finetuned-segments-sidewalks-22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b2-finetuned-segments-sidewalks-22
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
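Pending further documentation, a minimal inference sketch (assuming the checkpoint keeps the standard SegFormer semantic-segmentation head) is:

```python
# Hedged sketch: per-pixel class prediction with this checkpoint.
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

model_id = "saad7489/segformer-b2-finetuned-segments-sidewalks-22"
processor = SegformerImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("sidewalk.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]       # per-pixel class ids
print(pred.shape, pred.unique())
```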
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf | RichardErkhov | 2024-10-21T06:06:44Z | 30 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T05:38:13Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Colibri-RAG-Llama-3.2-3B - GGUF
- Model creator: https://huggingface.co/igmochang/
- Original model: https://huggingface.co/igmochang/Colibri-RAG-Llama-3.2-3B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Colibri-RAG-Llama-3.2-3B.Q2_K.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q2_K.gguf) | Q2_K | 1.27GB |
| [Colibri-RAG-Llama-3.2-3B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [Colibri-RAG-Llama-3.2-3B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [Colibri-RAG-Llama-3.2-3B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [Colibri-RAG-Llama-3.2-3B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [Colibri-RAG-Llama-3.2-3B.Q3_K.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q3_K.gguf) | Q3_K | 1.57GB |
| [Colibri-RAG-Llama-3.2-3B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [Colibri-RAG-Llama-3.2-3B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [Colibri-RAG-Llama-3.2-3B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [Colibri-RAG-Llama-3.2-3B.Q4_0.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q4_0.gguf) | Q4_0 | 1.79GB |
| [Colibri-RAG-Llama-3.2-3B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [Colibri-RAG-Llama-3.2-3B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [Colibri-RAG-Llama-3.2-3B.Q4_K.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q4_K.gguf) | Q4_K | 1.88GB |
| [Colibri-RAG-Llama-3.2-3B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [Colibri-RAG-Llama-3.2-3B.Q4_1.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q4_1.gguf) | Q4_1 | 1.95GB |
| [Colibri-RAG-Llama-3.2-3B.Q5_0.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q5_0.gguf) | Q5_0 | 2.11GB |
| [Colibri-RAG-Llama-3.2-3B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [Colibri-RAG-Llama-3.2-3B.Q5_K.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q5_K.gguf) | Q5_K | 2.16GB |
| [Colibri-RAG-Llama-3.2-3B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [Colibri-RAG-Llama-3.2-3B.Q5_1.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q5_1.gguf) | Q5_1 | 2.28GB |
| [Colibri-RAG-Llama-3.2-3B.Q6_K.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q6_K.gguf) | Q6_K | 2.46GB |
| [Colibri-RAG-Llama-3.2-3B.Q8_0.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-gguf/blob/main/Colibri-RAG-Llama-3.2-3B.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
language:
- es
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---
# Model Card for Model ID
This model was fine-tuned on a Spanish dataset for a RAG chatbot that answers questions about Costa Rican biodiversity species.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
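Since the training prompt template is not documented, the hedged retrieve-then-generate sketch below uses an illustrative Spanish system prompt and a placeholder list of retrieved passages; adapt both to the actual format.

```python
# Hedged sketch: stuff retrieved context into the prompt, then generate.
from transformers import pipeline

generator = pipeline(
    "text-generation", model="igmochang/Colibri-RAG-Llama-3.2-3B", device_map="auto"
)

passages = ["El quetzal habita los bosques nubosos de Costa Rica."]  # from your retriever
messages = [
    {"role": "system",
     "content": "Responde usando solo el contexto dado.\n\nContexto:\n" + "\n".join(passages)},
    {"role": "user", "content": "¿Dónde vive el quetzal?"},
]
out = generator(messages, max_new_tokens=256, return_full_text=False)[0]
print(out["generated_text"])
```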
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xfoxxe/results | xfoxxe | 2024-10-21T06:06:15Z | 197 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:aubmindlab/aragpt2-base",
"base_model:finetune:aubmindlab/aragpt2-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-21T05:36:31Z | ---
library_name: transformers
base_model: aubmindlab/aragpt2-base
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [aubmindlab/aragpt2-base](https://huggingface.co/aubmindlab/aragpt2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3519
## Model description
More information needed
## Intended uses & limitations
More information needed
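Until the card is completed, a basic generation sketch (AraGPT2 is an Arabic GPT-2, so prompts should be Arabic text) is:

```python
# Hedged sketch: plain causal generation with the fine-tuned checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="xfoxxe/results")
print(generator("كان يا ما كان", max_new_tokens=50)[0]["generated_text"])
```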
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.6158 | 0.9976 | 258 | 4.4942 |
| 5.394 | 1.9990 | 517 | 4.4081 |
| 5.2155 | 2.9966 | 775 | 4.3753 |
| 5.1119 | 3.9981 | 1034 | 4.3581 |
| 5.2103 | 4.9879 | 1290 | 4.3519 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|