modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-02 12:29:30) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 548 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-02 12:29:18) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
mradermacher/Qevacot-7B-GGUF | mradermacher | 2024-10-17T07:24:55Z | 15 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:bunnycore/Qevacot-7B", "base_model:quantized:bunnycore/Qevacot-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2024-10-16T22:59:12Z |
---
base_model: bunnycore/Qevacot-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Qevacot-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qevacot-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
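If llama.cpp or a llama.cpp-based runtime is available, a minimal Python sketch using llama-cpp-python might look like the following; the Q4_K_M file is one choice from the Provided Quants table below, and the context size is a placeholder (the card itself does not prescribe a specific tool):
```python
# Sketch: download one quant file and run it with llama-cpp-python,
# one of several GGUF-capable runtimes.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Qevacot-7B-GGUF",
    filename="Qevacot-7B.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)  # n_ctx is an arbitrary example value
output = llm("Q: What is a GGUF file?\nA:", max_tokens=64)
print(output["choices"][0]["text"])
```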
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-GGUF/resolve/main/Qevacot-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-GGUF/resolve/main/Qevacot-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-GGUF/resolve/main/Qevacot-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-GGUF/resolve/main/Qevacot-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-GGUF/resolve/main/Qevacot-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-GGUF/resolve/main/Qevacot-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-GGUF/resolve/main/Qevacot-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-GGUF/resolve/main/Qevacot-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-GGUF/resolve/main/Qevacot-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-GGUF/resolve/main/Qevacot-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-GGUF/resolve/main/Qevacot-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-GGUF/resolve/main/Qevacot-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| uinsuska/sd-class-butterflies-64 | uinsuska | 2024-10-17T07:22:15Z | 40 | 0 | diffusers | ["diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us"] | unconditional-image-generation | 2024-10-17T07:19:17Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('uinsuska/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
| Huan69/Belle-whisper-large-v3-zh-punct-fasterwhisper | Huan69 | 2024-10-17T07:19:23Z | 233 | 2 | null | ["license:apache-2.0", "region:us"] | null | 2024-10-16T08:40:04Z |
---
license: apache-2.0
---
## Introduction
This model is a modified version of **Belle-whisper-large-v3-zh-punct**, which enhances Chinese punctuation mark capabilities while maintaining strong performance on Chinese ASR benchmarks. The modifications were made to suit specific use cases.
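Assuming the repository holds weights in the CTranslate2 format used by [faster-whisper](https://github.com/SYSTRAN/faster-whisper), as the repository name suggests, a minimal transcription sketch could look like this (the audio path is a placeholder):
```python
# Sketch: load the repository with faster-whisper and transcribe a Chinese audio file.
from faster_whisper import WhisperModel

model = WhisperModel(
    "Huan69/Belle-whisper-large-v3-zh-punct-fasterwhisper",
    device="cpu",         # or "cuda" if a GPU is available
    compute_type="int8",  # example compute type; adjust to your hardware
)
segments, info = model.transcribe("audio.wav", language="zh", beam_size=5)
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```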
### Citation
If you use this model, please cite the original work:
```bibtex
@misc{BELLE,
author = {BELLEGroup},
title = {BELLE: Be Everyone's Large Language model Engine},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/LianjiaTech/BELLE}},
}
```
Original repositories:
- https://github.com/LianjiaTech/BELLE
- https://github.com/shuaijiang/Whisper-Finetune
| CheeLi03/whisper-base-tr-8 | CheeLi03 | 2024-10-17T07:17:36Z | 6 | 0 | null | ["tensorboard", "safetensors", "whisper", "hf-asr-leaderboard", "generated_from_trainer", "tr", "dataset:fleurs", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "model-index", "region:us"] | null | 2024-10-17T03:10:42Z |
---
base_model: openai/whisper-base
datasets:
- fleurs
language:
- tr
license: apache-2.0
metrics:
- wer
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: Whisper Base Turkish 8000 - Chee Li
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Google Fleurs
type: fleurs
config: tr_tr
split: None
args: 'config: tr split: test'
metrics:
- type: wer
value: 25.847853142501553
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Turkish 8000 - Chee Li
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Google Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5649
- Wer: 25.8479
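A minimal inference sketch, assuming the standard 🤗 Transformers ASR pipeline (the audio path is a placeholder):
```python
# Sketch: transcribe Turkish speech with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="CheeLi03/whisper-base-tr-8")
result = asr("sample.wav", generate_kwargs={"language": "turkish"})
print(result["text"])
```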
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 850
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.1634 | 5.5866 | 1000 | 0.4092 | 24.8833 |
| 0.0075 | 11.1732 | 2000 | 0.4509 | 24.2066 |
| 0.0024 | 16.7598 | 3000 | 0.4874 | 24.1910 |
| 0.0012 | 22.3464 | 4000 | 0.5125 | 24.3777 |
| 0.0008 | 27.9330 | 5000 | 0.5305 | 24.5644 |
| 0.0005 | 33.5196 | 6000 | 0.5473 | 24.8289 |
| 0.0004 | 39.1061 | 7000 | 0.5592 | 24.9922 |
| 0.0003 | 44.6927 | 8000 | 0.5649 | 25.8479 |
### Framework versions
- Transformers 4.43.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| Serione/opt-125m-5 | Serione | 2024-10-17T07:14:42Z | 187 | 0 | transformers | ["transformers", "safetensors", "opt", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-10-16T16:42:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
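Until the card is filled in, one way to try the checkpoint is the standard 🤗 Transformers text-generation pipeline; a sketch, with a placeholder prompt:
```python
# Sketch: load the OPT-125m checkpoint and generate a short continuation.
from transformers import pipeline

generator = pipeline("text-generation", model="Serione/opt-125m-5")
print(generator("Hello, my name is", max_new_tokens=20)[0]["generated_text"])
```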
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| knowledgator/Qwen2-0.5Bchp-690-MultiBio | knowledgator | 2024-10-17T07:11:30Z | 93 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-10-17T07:08:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
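As a sketch only, assuming the checkpoint uses the standard Qwen2 chat template in 🤗 Transformers (the question is a placeholder):
```python
# Sketch: apply the tokenizer's chat template and generate a reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "knowledgator/Qwen2-0.5Bchp-690-MultiBio"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Give a one-sentence definition of a ribosome."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```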
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| enginia/tiny_fsdp_dbc_171024 | enginia | 2024-10-17T07:02:10Z | 131 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-10-17T06:59:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| ckb100/pokemon-image-classifier | ckb100 | 2024-10-17T06:59:54Z | 165 | 0 | transformers | ["transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2024-10-17T06:59:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
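As a sketch only, assuming the standard 🤗 Transformers image-classification pipeline (the image path is a placeholder):
```python
# Sketch: classify a Pokémon image with the ViT checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="ckb100/pokemon-image-classifier")
for prediction in classifier("pikachu.png"):
    print(prediction["label"], round(prediction["score"], 3))
```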
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| CheeLi03/whisper-base-rus-8 | CheeLi03 | 2024-10-17T06:53:55Z | 6 | 0 | null | ["tensorboard", "safetensors", "whisper", "hf-asr-leaderboard", "generated_from_trainer", "ru", "dataset:fleurs", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "model-index", "region:us"] | null | 2024-10-17T02:48:39Z |
---
base_model: openai/whisper-base
datasets:
- fleurs
language:
- ru
license: apache-2.0
metrics:
- wer
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: Whisper Base Russian 8000 - Chee Li
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Google Fleurs
type: fleurs
config: ru_ru
split: None
args: 'config: ru split: test'
metrics:
- type: wer
value: 25.55451630144308
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Russian 8000 - Chee Li
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Google Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4957
- Wer: 25.5545
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 850
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0635 | 5.4645 | 1000 | 0.3433 | 22.5882 |
| 0.0051 | 10.9290 | 2000 | 0.3879 | 23.0492 |
| 0.0019 | 16.3934 | 3000 | 0.4186 | 23.8976 |
| 0.0011 | 21.8579 | 4000 | 0.4422 | 24.4522 |
| 0.0007 | 27.3224 | 5000 | 0.4613 | 25.0 |
| 0.0005 | 32.7869 | 6000 | 0.4781 | 25.3140 |
| 0.0004 | 38.2514 | 7000 | 0.4907 | 25.4209 |
| 0.0003 | 43.7158 | 8000 | 0.4957 | 25.5545 |
### Framework versions
- Transformers 4.43.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| Kerneld/klue-roberta-base-klue-sts-mrc | Kerneld | 2024-10-17T06:52:06Z | 6 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:17552", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:Kerneld/klue-roberta-base-klue-sts", "base_model:finetune:Kerneld/klue-roberta-base-klue-sts", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2024-10-17T06:51:43Z |
---
base_model: Kerneld/klue-roberta-base-klue-sts
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:17552
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: LPAT의 시험문제를 출제한 기관은?
sentences:
- '센고쿠 시대의 아이즈 지방은 센고쿠 다이묘 아시나씨가 구로카와(黒川)를 본거지로 하여 지배하고 있었다. 1589년, 다테 마사무네가 아시나
가문을 멸망시키고 이 지역을 차지하였으나 1590년에 도요토미 히데요시에 의해 아이즈 지방 및 주변 지역을 몰수당하였고, 대신 가모 우지사토가
아이즈 42만 석(훗날 92만 석이 됨) 영지를 소유하게 되었다. 우지사토는 구로카와를 와카마쓰(若松)라 개명하였고, 가미가타로부터 상인들을
초빙하여 영지 경영에 공헌하였다. 우지사토의 뒤를 이은 가모 히데유키는 1598년에 우쓰노미야 번 12만 석으로 삭감 전봉되었고, 에치고 국로부터
우에스기 가게카쓰가 고쿠다카 120만 석으로 입번하였다.
그러나 가게카쓰도 세키가하라 전투 때 이시다 미쓰나리의 편에 섰다는 이유로 1601년에 요네자와 번 24만 석으로 삭감 전봉되었고, 당시 도쿠가와
이에야스측에서 싸웠던 가모 히데유키는 고쿠다카가 60만 석으로 추가되어 아이즈로 돌아올 수 있었다. 히데유키의 뒤를 이은 가모 다다사토가 1627년
급사하자, 후사가 없어 영지가 몰수될 뻔하였으나, 그 어머니가 도쿠가와 이에야스의 딸인 관계로, 동생 가모 다다토모가 가모씨의 당주를 이어받고
이요 마쓰야마 번으로 옮겨가 명맥을 유지하게 되었다. 대신 마쓰야마 번으로부터 가토 요시아키가 아이즈로 들어왔다. 하지만 2대 번주 가토 아키나리가
아이즈 소동이라는 후계자 분쟁에 휘말리면서 결국 영지를 반납하게 되었다.
가토씨가 아이즈에서 퇴출당한 1643년, 야마가타 번에 있던 호시나 마사유키가 23만 석으로 아이즈 번에 들어왔다. 마사유키는 2대 쇼군 도쿠가와
히데타다의 사생아이자 3대 쇼군 도쿠가와 이에미쓰의 배다른 동생으로, 이에미쓰의 신뢰를 받아 막부 정치에 중요한 역할을 하였으나, 다케다씨의
유신이었던 양부 호시나 마사미쓰와의 의리상 마쓰다이라로 성을 바꾸지는 않았다. 이후 호시나씨는 3대 마사카타 대에 비로소 마쓰다이라로 개성하였고,
도쿠가와 히데타다의 후손으로 인정받아 신판 다이묘가 되었다. 이후 실제 고쿠다카는 40만 석까지 올라갔으며, 미토 번(도쿠가와 고산케 중 하나)보다도
실수입이 많고 군사력도 강대한 번이 되었다.
막말, 마지막 번주 마쓰다이라 가타모리는 1862년에 교토슈고쇼쿠를 맡았고 신센구미를 휘하에 두어, 존왕양이파 지사들의 관리와 교토 치안유지를
담당했다. 금문의 변 때는 고메이 천황을 탈취하려는 조슈 번 세력으로부터 궁궐을 지켜냈다. 이후 가타모리는 고메이 천황으로부터 아이즈 번에
의지한다는 내용의 서한을 받았다. 고메이 천황 사후, 대정봉환, 왕정복고를 거쳐 보신 전쟁이 발발하자, 가타모리는 구 막부 세력의 중심으로
여겨지면서 신정부군의 주적이 되었고, 고메이 천황의 서한이 있음에도 불구하고 조적(朝敵)으로 간주되었다. 아이즈 번은 오우에쓰 열번동맹의 지원을
받아 쇼나이 번과 함께 동맹을 체결하고 신정부군에 대항하였으나, 아이즈 전쟁에서 패배하고 결국 항복하였다. 아이즈 번 영지는 이때 몰수되었고,
가타모리는 금고형에 처해져 돗토리 번에 유폐되었다. 1869년, 메이지 정부는 가타모리의 아들 마쓰다이라 가타하루가 가문의 명맥을 잇는 것을
허락하고 도나미 번을 세우게 했지만, 아이즈 지역은 메이지 정부의 직할지가 되었고, 와카마쓰 현이 설치되었다가 1876년에 후쿠시마현에 합병되었다.'
- '영어는 홍콩의 공용어 중 하나이다. 그런데 현지 홍콩 사람들의 약 95%는 중국 사람들이며, 이들은 영어를 학교에서 배우는 제2 언어로서
사용하고 있다. 일상 생활에서는 광둥어가 주로 쓰인다.
1997년 홍콩 주권 이양 이후, 영어는 계속 공식적인 언어로 남아 있지만, 새 정부의 방침에 따라 일부 초등 학교 및 중등 학교만이 공식
교재의 언어로서 영어를 쓰는 데 그치고 있다. 반면 대학 및 기업, 법원 등지에서는 영어가 널리 쓰이고 있다.
싱가포르 사람이나 오스트레일리아 사람들과는 달리, 홍콩 사람들은 자신들이 말하는 홍콩식 영어가 어딘가 조금씩 잘못된 영어라고 생각한다. 교육을
잘 받은 사람들은 보통 영국식 영어를 기본으로 약간의 미국식 영어가 섞인 형태의 영어를 구사한다. 단, 개인 수준 차에 따라 수준은 달라질
수 있다.
영어 원어민이 아닌 지역 영어 교사의 영어 수준은 종종 논란 거리가 된 바 있다. 이에 따라 홍콩 교육부는 영어학과 학사 학위가 없는 교사들에게,
그들의 영어 실력이 일정 수준 이상이 되도록 보장하기 위해, LPAT이라는 시험을 통과한 증명을 제출하도록 요구하였으며, LPAT을 통과하지
못한 교사들은 퇴출되었다. 정부에 의해 고용된 교사 이외에, 영어 원어민조차도 이 시험에 떨어질 정도였다. 교사들 중 일부는 시험을 피해 은퇴하기도
하였으나, 많은 수의 교사들이 시험을 통과하지 못하였다.'
- 금오공과대 건축학과를 졸업한 최종섭 씨는 지난해 ‘한·미 대학생 연수취업(WEST)’ 프로그램을 통해 미국에서 인턴으로 근무하던 건축회사 루멘스에
정규직으로 취업했다. 전문대 글로벌 현장학습에 참여했던 정지은 씨(영남이공대 식음료조리계열)도 아랍에미리트(UAE) 두바이의 5성급 호텔인
제벨알리호텔에서 현장실습을 하다 빠른 손놀림과 성실성을 인정받아 현지 취업에 성공했다.해외에서 어학연수나 학점을 이수하며 인턴 체험을 하는
글로벌 현장학습 프로그램을 통해 올해 대학생들이 대거 파견 나간다. 교육부와 한국대학교육협의회, 한국전문대학교육협의회는 3일 ‘2015년 글로벌
현장학습 사업 계획’을 통해 올해 96억9400만원을 들여 학생 1090명을 해외에 파견한다고 발표했다.WEST 프로그램의 경우 올해 390명이
미국에서 어학연수를 받고 정보기술(IT), 금융, 항공, 패션 등 전공과 관련된 분야에서 인턴으로 일한다. 참가자 전원에게 왕복항공료 200만원을
지급하고 어학연수비, 생활비는 소득 수준에 따라 차등적으로 지원한다. 저소득층 학생에게는 최대 2455만원을 지원한다.올해는 기존 18개월과
6개월 외에 12개월 프로그램이 추가됐다. 지원 대상은 4년제대, 전문대 재학생이나 1년 이내 졸업생으로 정부 해외인턴 포털 사이트(www.ggi.go.kr)에서
신청할 수 있다.글로벌 현장학습은 올해 700명이 참가하지만 상반기 선발은 끝났고 7월에 141명을 추가로 뽑는다. 참가 학생은 6개월 동안
외국에서 현장실습을 하며 최대 20학점을 취득할 수 있다. 항공료, 비자 발급비, 보험료 등을 정부로부터 지원받는다.현장학습이지만 본인의 노력에
따라 해외 현지 취업도 가능하다. 김태성(청암대·일본 IT기업 취업), 박원우(인천대·호주 투자운용사) 씨 등이 현장학습을 통해 해외 현지에
취업했다.
- source_sentence: 백만장자들이 가장 많이 해외로 유출된 국가는?
sentences:
- 지난 10년간 백만장자들이 가장 살고 싶어한 나라는 어디일까. 영국의 컨설팅기업 ‘뉴월드웰스’가 발표한 보고서에 따르면 2003~2013년
가장 많은 백만장자가 이주한 곳은 영국이었다. ‘백만장자’의 기준은 주거용 주택을 제외하고 최소 100만달러(약 10억7000만원) 이상 보유한
사람을 뜻한다. 이 시기 영국으로 순유입된 백만장자 수는 11만4100명이었다. 2위를 차지한 싱가포르(4만5000명)의 두 배에 달한다.
씨티그룹이 실시한 별도 설문에서 백만장자들은 경제적 활동, 삶의 질, 우수한 교육 및 주택 여건, 사회적 안전, 신변 보호 등에서 런던을 선호한다고
답했다. 싱가포르는 안전한 환경과 세금우대 정책 때문에 백만장자의 선호도가 높은 것으로 나타났다. 미국(4만2400명)과 호주(2만2200명),
홍콩(1만9700명) 등이 뒤를 이었다. 같은 기간 중국에선 7만6200명의 백만장자가 빠져나갔다. 중국은 스모그 등 환경 오염이 심해지고
‘부패와의 전쟁’이 선포되면서 부유층이 해외로 빠져나가는 사례가 늘고 있다. 이들은 생활 환경이 비슷한 홍콩, 싱가포르 등으로의 이주를 선호하는
것으로 나타났다. 인도에서도 4만3400명의 백만장자가 빠져나갔다. 이어 프랑스(3만1700명), 이탈리아(1만8600명), 러시아(1만4000명)
등의 순이었다.
- "고조선의 도읍지는 여러 차례 이동한 것으로 기록되어 있다. 《삼국유사》는 《고기》를 인용하여, 단군왕검이 처음에는 평양성에 도읍을 정하였으나\
\ 이후 백악산아사달로 옮겨서 1천 5백 년간 나라를 다스렸으며, 이후 주나라 때 기자가 조선왕에 책봉되자, 단군은 장당경으로 옮겼다가 뒤에\
\ 아사달로 돌아왔다고 하였다. 조선민주주의인민공화국은 1970년대 이전까지 고조선의 도읍지를 랴오닝성이라 주장하였으나, 주체사상의 강화 이후에는\
\ 오늘날의 평양시가 고조선의 도읍지라고 주장하면서 단군릉이 평양시에 있다는 점을 내세우고 있다. 이러한 조선민주주의인민공화국 역사학의 입장\
\ 변화는 정치적 영향에 따른 것이라 비판된다 한편 윤내현은 고조선의 도읍지 이동이 총 5차례라고 주장하며 그 위치를 모두 비정하는 연구를\
\ 하기도 하였다. \n\n대한민국 역사학계에서 지배적인 학설인 중심지 이동설에 따르면 고조선은 초기에 랴오둥 반도 지역을 중심으로 발전하다가\
\ 기원전 3세기 무렵 연나라의 침입을 받아 영토를 대거 상실하고 평양 일대로 중심지가 이동하였다고 한다. 고조선의 마지막 왕조인 위만조선의\
\ 도읍지인 왕검성 오늘날 조선민주주의인민공화국의 평양시라는 견해가 지배적이며, 중국의 랴오닝성 지역에 있었다는 소수설도 있다. 기원전 108년\
\ 전한 무제의 공격을 받아 왕검성이 함락됨으로써 고조선이 멸망했다. 왕검성이 있던 곳에는 낙랑군이 설치되어 이후 수세기 동안 중국과 한반도의\
\ 중계무역 기지의 역할을 했다."
- 전국 최대 변호사 밀집지역인 서울 서초동 법조타운에서 일해온 홍승권 변호사(30·변호사시험 1회)는 다음달 서울 약수동으로 사무실을 옮길 예정이다.
법원과 검찰청이 있는 동네에서 아파트 단지가 많고 중소기업도 더러 있는 곳으로 이사를 결정한 것이다. 법조타운 밖으로 나가 ‘생활밀착형 수요’를
개척하겠다는 생각에서다. 홍 변호사는 “집 또는 회사와 가까운 곳에서 법률 상담을 받고자 하는 수요가 있다고 보고 이전을 결심했다”며 “사무실이
지역주민의 눈에 많이 띄어 인지도가 높아지면 인근 시장을 선점할 수 있을 것”이라고 기대했다.일감을 찾아 법원 주변 등 법조타운으로 몰려 들었다가
상가나 주거지역 등으로 빠져 나가는 ‘유턴 변호사’가 늘어나고 있다. 얼마 전까지만 해도 변호사들은 법원·검찰청 인근에서 사무실을 여는 게
보통이었다. 그러나 이런 관행을 깨고 실리를 따라 주거지역이나 오피스지역에서 사무실을 여는 사례가 많아지고 있다. 전국 변호사 수가 2만명을
넘어서는 등 업계 경쟁이 심해진 게 변호사 유턴 현상의 주된 배경이다. 번화가에 있는 법조타운보다 외곽 지역에 있으면 사무실 임차료 등 비용을
아낄 수 있다는 점도 한몫했다. 홍 변호사는 사무실 이전으로 서초동에서 월 150만원씩 내던 사무실 임차료를 70만원으로 줄일 수 있다.김한규
서울지방변호사회 부회장은 “변호사단체에 등록 신고를 할 때 법조타운이 아닌 곳을 사무실 주소지로 쓰는 변호사가 최근 들어 크게 늘었다”고 전했다.
10년차 변호사 A씨는 “소송 의뢰인은 길을 가다 편의점에 들르듯 변호사 사무실에 오는 게 아니라 대부분 소개를 받는 등 미리 알아보고 특정
사무실을 찾아온다”며 “굳이 서초동 법원 앞에 있을 필요가 없다는 점에 다들 공감하는 분위기”라고 말했다. A씨는 “선뜻 이전을 결심하지 못하는
사람도 다른 변호사의 이전 경험담에 관심을 기울이며 고민하는 걸 많이 봤다”고 전했다.서울을 벗어나 아예 지방으로 옮기는 변호사도 있다. 김재윤
변호사(35·사법연수원 42기)는 지난 3월 서초동에서 전남 여수시로 사무실을 옮겼다. 여수는 전남에서 경제 규모가 제일 크고 인구도 29만명으로
많은 편이지만 올해 초까지만 해도 변호사는 3명이 고작이었다. 이 때문에 여수 시민들은 변호사를 찾을 일이 있으면 가장 가까운 법조타운이 있는
광주지방법원 순천지원 앞까지 가는 일이 많았다. 그러나 김 변호사가 들어온 뒤 3명이 더 따라들어와 올해 여수지역 변호사 수가 7명으로 늘었다.
김 변호사는 “사무실을 옮긴 뒤 여수 지역주민의 법률 수요를 잡는 데 어느 정도 성공했다”며 “지금은 서울에 있을 때보다 세 배 이상 많은
수입을 올리고 있다”고 말했다.
- source_sentence: '''탄소 없는 섬'' 프로젝트로 인해 사라지는 직업의 개수는?'
sentences:
- 제주특별자치도와 LG그룹이 손잡고 제주도를 ‘탄소 없는 섬’으로 만든다. 2030년까지 필요한 모든 에너지를 신재생에너지로 충당하고 모든 자동차를
전기자동차로 대체키로 했다. 이를 위해 3조원 규모 특수목적회사(SPC)를 내년 중 설립하기로 했다. “15년내 신재생에너지 비율 100%”원희룡
제주지사와 하현회 (주)LG 사장은 26일 제주도청에서 ‘글로벌 에코 플랫폼 제주’ 추진을 위한 업무협약(MOU·사진)을 맺었다. 글로벌 에코
플랫폼은 제주를 탄소 없이 신재생에너지로만 운영되는 섬으로 만들기 위한 시행방안이다.핵심은 크게 두 가지다. 2030년까지 신재생에너지 발전
비율을 최대 100%로 끌어올리는 것이 첫 번째다. 역시 2030년까지 도내 모든 자동차를 전기자동차로 바꾸는 것이 두 번째다.제주도와 LG그룹은
현재 210㎿ 정도인 풍력·태양광 등 신재생에너지 발전 용량을 2030년까지 3210㎿ 수준으로 늘릴 계획이다. 전기를 저장했다가 필요할 때
꺼내 쓸 수 있게 해주는 에너지저장장치(ESS)도 2030년까지 1300㎿ 규모로 설치한다. 이를 통해 현재 13% 정도인 도내 신재생에너지
발전 비율을 2030년까지 최대 100%(최소 85% 이상)로 끌어올릴 예정이다.제주 내 전기자동차는 현재 852대다. 이를 2030년까지
약 37만7000대로 늘린다는 계획이다. 지금은 79개인 전기자동차 충전소도 1만5000개 수준으로 늘리기로 했다. 이 같은 인프라를 활용해
전기차에 남은 전력을 다시 판매하는 ‘V2G(vehicle to grid)’ 같은 신사업도 추진할 방침이다.여기에 필요한 자금을 충당하기 위해
내년 중 3조원 규모 SPC를 세울 계획이다. SPC는 제주도와 LG그룹이 주축이 되고 한국전력, 전기자동차업체 등이 참여할 예정이다. 가능한
한 민간자본의 참여를 늘리고 정부 지분은 최소화한다는 게 제주도와 LG그룹의 계획이다. SPC는 사업을 통해 발생하는 수익의 일부분을 제주에
재투자하기로 했다. 이 같은 방식으로 총 6조원 이상을 제주 신재생에너지산업에 투자한다는 방침이다.LG그룹 신재생에너지 역량 총동원LG그룹은
제주를 신재생에너지사업의 ‘시범 시장’으로 적극 활용할 계획이다. 신재생에너지는 자동차부품산업과 함께 LG그룹이 추진하고 있는 양대 미래 신사업이다.
LG전자는 태양광 패널 등을 공급하게 된다. LG화학에서 ESS용 배터리를 납품받아 시스템을 구축하는 역할도 맡는다. 건물의 전력을 효율적으로
사용하게 해주는 ‘빌딩관리시스템(BMS)’도 구축한다. LG화학은 전기자동차와 ESS에 2차전지를 공급하게 된다. 제주도는 ‘탄소 없는 섬’
프로젝트를 통해 도내에 5만개 이상의 일자리가 창출될 것으로 기대하고 있다. 원 지사는 “세계 1위의 2차전지 기술력을 갖춘 LG그룹은 제주를
탄소 없는 섬으로 만들기 위한 최적의 파트너”라고 말했다.하 사장은 “글로벌 에코 플랫폼 제주는 창조경제의 전형적인 모델로, IT와 에너지
신기술을 합한 혁신적인 솔루션으로 추진할 것”이라고 설명했다.
- 컴퓨터 해킹으로 낙찰가를 조작해 지방자치단체 관급 공사를 불법 낙찰받아온 일당이 검찰에 적발됐다. 이들은 지자체재무담당자(재무관) PC에 USB(이동식메모리장치)를
꽂아 악성 프로그램을 직접 심는 대담한 수법으로 30여건의 공사를 따냈다. 서울중앙지방검찰청 첨단범죄수사2부(부장검사 김석재)는 해킹을 통해
공사 낙찰 하한가를 조작한 혐의(컴퓨터 등 사용사기) 등으로 프로그램 개발팀 운영자 김모씨(52)와 공사브로커 오모씨(55), 건설업자 김모씨(55)
등 10명을 구속 기소하고 15명을 불구속 기소했다고 4일 밝혔다. 이들이 조작한 것은 2002년 조달청이 도입한 관급공사 전자입찰 시스템인
‘국가종합전자조달(나라장터)’의 입찰 정보다. 이 입찰 시스템은 조달청과 발주처의 재무담당 직원 PC가 암호화된 낙찰 예정가격(예가) 15개를
임의로 만들면 입찰 업체들이 이를 무작위로 2개씩 추첨하는 방식이다. 이 중 가장 많이 나온 4개를 평균내 낙찰 하한가를 정하고 이에 가까운
가격을 써내는 업체가 낙찰받는다. 모든 정보가 암호화돼 사전 입찰·담합이 어렵고 낙찰가를 예상할 수 없어 업계에선 ‘로또’로 통해왔다는 게
검찰의 설명이다. 그러나 이들은 나라장터 시스템보다 상대적으로 보안이 허술한 지자체 재무관의 PC를 공략했다. 평소 관청의 재무담당 직원과
안면이 있는 건설업자 등이 재무관의 PC를 빌려 쓰는 것처럼 위장해 USB로 프로그램을 심은 것. 또 200여개의 경쟁 건설업체에도 입찰 참고자료를
보내는 것처럼 피싱 이메일을 보내 같은 프로그램을 설치했다.검찰 관계자는 “15개 예가 중 자신들이 고른 금액으로 낙찰 하한가를 바꿔치기 해
조달청 서버로 전송하는 프로그램”이라며 “조작한 낙찰 하한가보다 적게는 수십원에서 많게는 1만원 정도 높은 금액을 써내 원하는 공사를 따내왔다”고
설명했다. 2007~2012년 사이 총 20곳이 31건(낙찰가 기준 291억원)의 공사를 이 같은 방식으로 낙찰받아왔다고 검찰은 밝혔다.
봉화군을 비롯해 대부분 경북권에서 적발 됐으며 서울 성북구청에서 발주한 2건도 있다. 일부 업체는 프로그램 개발 과정에 자금을 대거나 개발자
급여를 주기도 한 것으로 드러났다.
- '547년 2월에 벨리사리우스가 돌아왔을때의 로마는 황폐속에 버려진것과 같았다. 성벽복구부터 시작하여 재건을 시도했으나 황제는 남부 이탈리아에서
고트족을 몰아내라고 명하였다. 548년이 되자 황제는 벨리사리우스를 콘스탄티노폴로 소환한후 한직에서 머물도록 하였다. 벨리사리우스가 떠난 이탈리아는
다시 동고트족의 세상이 되어갔다. 황제는 자신의 측근이자 환관인 나르세스를 사령관으로 임명하여 이탈리아 정복전쟁을 마무리 짓도록 했다.
552년 타기나이의 전투에서 토틸라가 전사하면서 동고트족의 세력이 무너지기 시작했다. 553년이 되자 나머지 잔당들이 전멸하며 비잔틴이 이탈리아를
완전히 점령하였다. 그러나 535년부터 553년까지 18년동안이나 진행된 전쟁으로 이탈리아 반도는 황폐화되었다.
황제 유스티니아누스 1세는 548년에 황후 테오도라가 사망한후에 종교문제에만 집착하며 통치에 소홀했다. 아울러 비잔틴 제국의 관리들은 기본적인
치안유지나 해적의 약탈에 대한 적절한 대응등 행정 서비스에는 무관심한 채 백성들로부터 세금만 뜯어가는 무능력함을 들어내었다. 이런 일들은 황제
유스티니아누스 1세가 사망한이후에도 지속되었다. 그래서 북아프라카,중근동,이집트, 발칸반도,이탈리아에서 백성들의 원성이 드높았으며 7세기말에
이교도인 이슬람군이 침공하자 이들을 해방군으로 환영할 정도였다.'
- source_sentence: 박종철이 검찰총장직에서 물러나는데 결정적 역할을 한 제도는?
sentences:
- 경기 침체로 올 들어 국세가 10조원 가까이 줄어든 가운데 징수액이 큰 주요 세목 가운데 종합소득세 세수만 유일하게 늘고 있어 그 배경에 관심이
쏠리고 있다. 개인사업자, 근로 외 소득이 있는 근로자 등이 주로 내는 종합소득세는 경기 영향을 많이 받는 데다 부가가치세 주세 법인세 등의
세수와 비슷한 흐름을 보이게 마련인데, 유독 올해만 종합소득세가 따로 움직이고 있는 것이다. 국세청 직원들이 ‘종합소득세 미스터리’라고 부를
정도다. 서울의 한 세무사는 “종합소득세는 한 해 세수의 가늠자라고 할 정도로 세수 흐름을 가장 잘 반영하는 항목 중 하나”라며 “올해처럼
종합소득세만 늘어나는 것은 매우 이례적”이라고 말했다.○작년보다 5천억 이상 더 걷힐 듯국세청 고위 관계자는 16일 “종합소득세 성실신고확인제
대상(수입 규모가 일정 금액 이상인 개인사업자)의 납부 기한인 지난 1일까지 종합소득세 신고금액은 작년보다 소폭 늘어난 5조원 정도로 잠정
집계됐다”고 밝혔다. 이 같은 추세면 올해 세수는 작년(9조9378억원)보다 5000억~6000억원 많은 10조5000억원에 달할 것이라는
설명이다. 이 같은 증가세는 글로벌 금융위기 영향으로 2010년 종합소득세가 2009년(6조8150억원)보다 줄어든 6조8062억원에 그쳤던
것과도 대조적이다. 올 들어 지난 5월까지 누적 종합소득세도 지난해 3조8588억원보다 4650억원(12.1%) 늘어난 4조3238억원에 달했다.
반면 근로소득세 등 다른 소득세는 지난해 15조5225억원에서 올 5월 15조3904억원으로 감소했다. 법인세 교통세 부가세 교육세 증권거래세
등 다른 세목도 마찬가지다. 법인세는 올 5월 19조9378억원이 걷히는 데 그쳐 작년 같은 기간에 비해 17.89%나 적었다. 부가세도 작년
같은 기간보다 7.22% 줄었고 교통세는 11.62%, 교육세 14.96%, 증권거래세는 무려 24.85%나 급감했다. 불황으로 위스키 등
고가의 술 판매가 감소하면서 주세도 11.06%나 줄었다. 개별소비세와 상속·증여세는 각각 2.10%, 4.29% 감소한 것으로 나타났다.○세무조사
피하고 보자?왜 종합소득세만 늘었는지를 놓고 세정당국과 세무업계에서도 해석이 분분하다. 일부에선 지난해 경기 불황에도 자영업자 수가 늘어난
데다 경기 불황이 개인사업자의 소득에 영향을 미치는 데 시간이 걸린다는 점을 지적하고 있다. 실제로 올해 종합소득세 납부 대상자는 611만명으로
지난해(575만명)보다 36만명(6.26%) 늘었다. 하지만 이 분석은 설득력이 없다는 반론도 있다. 납부 대상자는 작년보다 늘었지만 실제로
낸 사람은 작년과 큰 차이가 없기 때문이다. 경기 불황이 어느 정도의 시간을 지나 개인사업자의 소득에 영향을 미치는지도 검증된 바가 없다.
국세청 내부에선 올 들어 지하경제 양성화를 위해 불성실 납세에 대한 세무조사 확대 의지를 거듭 밝힌 것이 효과를 보고 있다는 관측을 내놓고
있다. 2011년 현재 연수입 5억원 이상 개인사업자의 소득탈루율이 40%에 달했던 점을 감안하면 세무조사 가능성을 우려한 고소득 사업자들이
평소보다 소득신고액을 높였을 여지는 충분하다는 설명이다. 일선 세무사들도 고객에게 “올해만큼은 성실신고를 하는 게 좋겠다”는 권유를 많이 한
것으로 전해지고 있다. 실제 국세청 관계자는 “신고 내역을 보면 지난해와 달리 비용 항목을 많이 줄인 경우가 꽤 된다”며 “조세 회피에 대한
사회의 부정적 분위기 등이 소득세 신고에 영향을 미친 것 같다”고 말했다.
- '룽산 문화는 산둥성 동부의 장추 시 룽산진에 있는 청즈아이(城子崖)에서 1928년에 유적이 출토되어 1930년 이후 본격적으로 발굴되었다.
룽산 문화의 특징은 고온으로 구운 회도, 흑도를 중심으로 한 높은 도기 기술에 있으며, 그릇이 얇고 균일하여 녹로가 사용되고 있었음을 추측할
수 있다. 특히 란각도(卵殻陶)로 불리는 것은 그릇을 알의 껍질처럼 얇게(0.5 – 1 mm) 만든 흑도의 도기로, 한층 더 연마해 검은 윤기를
내고 세밀한 문양을 조각한 것이다. 이것은 황하 유역뿐만 아니라 장강 유역이나 중국의 남부 해안 부근에서도 발견되고 있어 룽산 문화의 확산을
알려주고 있다. 한편으로 장강 중류 지역의 취자링 문화도 회도, 흑도를 특징으로 하는 문화로 허난성 부근에까지 영향을 끼치고 있어 룽산 문화가
장강 부근의 문화의 영향을 받았을 가능성도 있다.
도기 생산 효율의 상승은 출토하는 도기의 수나 종류가 전대 문화에 비해 대폭 증대되었던 것에서도 볼 수 있어 솥이나 역(鬲, 삼족 솥), 규(鬹,
세발달린 가마솥), 높은 자루 잔(高柄杯) 등, 조리기나 식기로서 사용된 다양한 흑도, 회도의 도기가 출토되었다.
도기뿐 아니라 돌칼(石包丁) 등 석기나 골기 등의 무기나 도구, 비취 등의 구슬도 출토되었다. 룽산 문화 후기에는 청동기도 출현하였고, 은대,
주대(또눈 은나라 이전의 하나라)의 청동기 시대로 가는 과도기였다고 생각할 수 있다.
룽산 문화의 사회에서 나타난 가장 큰 변화는 도시의 출현이다. 초기의 주거 형태는 수혈식 주거였지만, 곧 기둥이나 벽을 세운 가옥이 출현했다.
또 흙을 다져 만든 성벽이나 굴이 출토되고 있어, 특히 샨시성 샹펀 현 타오스 향(陶寺郷)의 남쪽에서 발견된 샨시 룽산문화의 유적, 타오시
유적(陶寺遺跡, 기원 전 2500년 - 기원 전 1900년)은 룽산 문화의 도시 유적 중에서도 가장 큰 것이다.
농업이나 수공업의 발달도 특징이다. 샨시성의 웨이허 주변에서는 농업과 목축업이 양사오 문화의 시기에 비해 크게 발전하였다. 쌀의 재배도 시작되었고,
누에를 기르는 양잠업의 존재와 소규모의 견직물 생산도 확인되고 있다.
동물의 견갑골을 사용한 점술이나 무술(巫術)의 흔적도 엿볼 수 있어 종교도 있었던 것으로 보인다. 농업 등의 발달로 인한 잉여 생산이 생기고,
사유재산이 출현하여 사회의 계층화가 진행되었고, 부권제 사회나 계급 사회가 탄생했다. 흑색 토기에서는 회전대를 사용하고 광택을 낸 것을 알수있으므로
제작기술도 발전하였다는 것을 알수있다
중국의 신석기 시대의 인구는 룽산 문화에서 절정을 이루었지만, 룽산 문화의 말기에는 인구가 격감했다. 동시에 분묘의 부장품에서 고품질의 란각도,
흑도 등도 볼 수 없게 되었다.'
- "1937년 대구직할시에서 태어나 경북고등학교와 1961년 서울대학교 법학과를 졸업하고 1962년 제15회 고등고시 사법과에 합격하여 1964년\
\ 대구지방검찰청 검사에 임용되었다. 고등고시 합격이 늦었지만 법무부와 검찰 내에서 요직을 거쳤던 박종철은 차분하고 자상한 성격으로 상하한게\
\ 신망이 두터우며 특히 \"인간성으로 성장한 사람\"이라는 말을 많이 들었다. \n\n1993년 3월에 딸의 이중 국적 논란으로 법무부 장관에서\
\ 물러난 박희태 후임으로 자리를 옮긴 김두희에 이어 제25대 검찰총장에 취임하였던 박종철은 1993년 3월과 9월에 있었던 공직자 재산공개에서\
\ 19억여원의 재산을 등록하면서 투기지역으로 알려진 경기도 용인군 모현면 등지에 임야 5900여평의 부동산을 소유한 것으로 드러나 부동산\
\ 투기 의혹이 제기되어 검찰총장에 취임한지 6개월이 지난 1993년 9월 13일에 \"그동안 검찰이 벌여온 사정 활동과 자기 쇄신의 노력에도\
\ 불구하고 국민들이 기대하는 바에 미치지 못함을 유감스럽게 생각해왔다\"며 \"모든 것이 검찰 총수인 본인이 부덕한 소치로 생각되어 책임을\
\ 통감하고 검찰총장직에서 물러난다\"고 하면서 법무부 장관에게 사표를 제출고 김영삼 대통령은 당일 수리했다. \n\n검찰총장을 마지막으로\
\ 공직에서 물러난 박종철은 1998년 4월에 대검찰청 강력부장을 역임했던 사법시험 제2회 출신의 최신석 등과 법무법인 일원을 창립하면서 대표\
\ 변호사에 취임하였다. 전직 검찰총장으로서 최초의 로펌 설립이다."
- source_sentence: 올 가을 처분되는 오피스텔은 몇 실인가?
sentences:
- 공급과잉 논란으로 분양침체에 빠졌던 주거용 오피스텔 시장에 최근 수요자들의 관심이 쏠리고 있다. ‘8·28 전·월세 대책’에서 오피스텔을 근로자·서민
주택구입자금의 싼 이자로 대출받아 매입할 수 있도록 했기 때문이다. 또한 올 들어 신규 공급이 대폭 줄면서 미분양 물량 해소가 빨라졌고, 기존
오피스텔의 수익률이 5%를 웃도는 등 시장 전체가 회복 조짐을 보이고 있는 것도 한 요인이다.○아파트 같은 대출 지원과 세제혜택16일 부동산업계에
따르면 최근 정부가 오피스텔에 다양한 세제 및 대출 혜택을 주기로 해 관심이 높아지고 있다. 지난해 4월부터 오피스텔에 대한 매입 임대사업자
등록이 허용된 데다 ‘4·1 부동산 대책’에 따라 연말까지 오피스텔을 구입하면 향후 5년간 양도세가 면제된다.‘8·28 대책’에서는 연 2.8~3.6%
수준인 국민주택기금 지원 대상에 6억원 이하 주거용 오피스텔을 포함시켰다. 다만 부부합산 연 소득이 6000만원 이하여야 하고, 최대 대출금은
2억원까지다. 소형 오피스텔 임대사업자의 임대소득에 대한 소득세·법인세 20% 감면도 추진된다. 기준시가 3억원 이하의 소형 주거용 오피스텔
3실 이상을 5년 이상 임대받을 때 혜택을 볼 수 있다.분양마케팅업체인 반더펠트의 호한철 대표는 “서울 마포 광화문 강남 구로와 분당 정자동
등 업무 밀집지역 인근에서는 주거용 오피스텔 수요가 꾸준하다”며 “최근 정부 대책으로 오피스텔 구입자금 지원 혜택 등이 늘어나며 실수요자나
퇴직자들을 중심으로 분양 문의전화가 증가하고 있다”고 말했다.○올 가을 신규 분양 크게 늘어가을 분양 성수기를 맞아 주거용 오피스텔도 잇따라
선보인다. 전국적으로 10여개 단지, 7000실을 웃도는 규모다. 부동산 개발업체인 파크하비오는 다음달 서울 문정동에서 복합단지 ‘송파 파크
하비오’를 분양한다. 오피스텔이 3527실 규모로 서울 지하철 8호선 장지역이 걸어서 3분 거리다.부동산 개발업체인 엠디엠도 같은 달 경기
수원 광교신도시 업무 8블록에서 647실 규모의 ‘광교 레이크파크’를 공급한다. 35·40층 2개동 규모로 광교호수공원(204만㎡)을 내려다볼
수 있는 게 매력이다. 모든 가구가 남향 3개면 개방 형태로 설계됐다. 입주자에게 클럽라운지에서 365일 식사가 제공되는 이색 서비스도 이뤄진다.서울
강남권, 경기 성남시 정자동과 판교신도시 등에 있는 기존 오피스텔도 투자문의가 이어지고 있다. 지난달부터 임대사업을 염두에 둔 투자자들이 매입에
나서면서 거래가 늘고 있다.부동산114에 따르면 서울지역 오피스텔의 연간 평균 임대수익률은 지난달 기준으로 5.45% 수준이다. 경희궁의아침,
스페이스본 등 종로지역 오피스텔의 수익률은 연 10%에 이르는 것으로 조사됐다.
- “구글 검색에서 비슷한 주제의 논문이 있으면 가차 없이 떨어뜨리더라고요. 이처럼 엄격하게 새 아이디어만 평가하는 곳은 처음입니다.” 최근 삼성미래기술육성재단
연구과제에 응모한 대학교수는 이렇게 말했다.삼성미래기술육성재단(이사장 국양 서울대 물리천문학부 교수·사진)은 지난해 8월 연구진흥 목적의 공익
연구재단으로 출범했으며, 삼성전자가 5000억원을 내놨다. 수리과학, 물리, 화학, 생명과학 분야 및 융복합 분야에서 창의적이고 도전적인 연구과제를
선정해 연구부터 특허출원까지 전 과정을 지원한다.재단은 설립 취지에 따라 연구과제를 심사할 때 ‘독창성’을 최우선적으로 들여다본다. 응모자의
이름과 소속은 중요하지 않다. 재단 관계자는 “현재의 틀을 허무는 도전적인 과제를 선정한다는 방침”이라고 설명했다. 세계 유일 또는 세계 최고의
독창적인 프런티어 연구와 실패를 두려워하지 않고 과감히 도전하는 연구를 적극 지원하기 위해서다.학계에서는 이 같은 연구과제 평가 및 지원 방법이
미국의 고등방위연구계획국(DARPA)이나 국가과학재단(NSF)과 비슷하다고 말한다. 이들 기관은 미국 정부 차원에서 10~20년 뒤 먹거리를
찾는 일을 하고 있다. 삼성미래기술육성재단이 이처럼 엄격한 기준에 따라 연구과제를 뽑다 보니 참신하고 혁신적인 주제가 많이 나온다. 이원재
서울대 교수의 ‘장뇌축(gut-brain-axis) 연구’가 대표적이다. 장이 두뇌와 미주신경을 통해 소통하면서 사실상 ‘제2 두뇌’ 역할을
한다는 새로운 이론이다. 층간소음을 혁신적으로 줄여줄 수 있는 ‘스큐메타포러스’ 소재(김윤영 서울대 교수 연구)도 재단 연구과제 선정을 통해
소개됐다.
- 삼성물산과 현대건설은 서울 고덕동 시영아파트를 재건축한 ‘고덕 래미안 힐스테이트’ 분양을 앞두고 그룹 임직원을 예비 수요자로 잡기 위해 적극
나서고 있다. 양사는 지난달 29·30일 서울 문정동 래미안 갤러리에서 삼성·현대차그룹 임직원 2800여명을 초청해 ‘분양 설명회’(사진)를
열었다. 상일동에 본사를 둔 삼성엔지니어링 직원 김모씨(38)는 “직장과 가까우면서도 교통과 교육 등 생활 편의시설이 좋아 청약할 예정”이라고
말했다. 삼성물산은 앞서 27·28일에도 각각 서울 강북과 강남에서 삼성물산 삼성생명 직원을 대상으로 분양 설명회를 열었다.2일 주택업계에
따르면 모델하우스 개장에 앞서 잠재 고객을 대상으로 하는 사전 분양 마케팅이 한층 다양해지고 있다. 분양시장 분위기에 휘둘리지 않는 지역 실수요자들을
공략하기 위해서다. 오는 4일 모델하우스를 여는 GS건설 ‘역삼 자이’는 이미 지난해 말부터 관심 소비자를 대상으로 ‘찾아가는 상담 서비스’를
운영 중이다. 개나리 6차를 재건축하는 이 단지는 분양가가 3.3㎡당 3000만원 전후인 고가 아파트다. 고소득 전문직 종사자들이 주요 잠재
고객이다. 구매력은 있지만 모델하우스를 찾을 시간이 없는 전문직 종사자들을 고객으로 확보하기 위해 직접 방문해 상담을 벌인다. 경기 하남 미사지구에서
분양을 앞둔 포스코건설 ‘더샵 리버포레’도 홈페이지를 통해 관심고객으로 등록한 예비 청약자들에게 최신 LED TV 등을 선물로 제공하는 등
사전 마케팅에 한창이다.
model-index:
- name: SentenceTransformer based on Kerneld/klue-roberta-base-klue-sts
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.8126195756472175
name: Pearson Cosine
- type: spearman_cosine
value: 0.8198976037040269
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7642085472769107
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.7843446052387523
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7619389784108498
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.7826892530796743
name: Spearman Euclidean
- type: pearson_dot
value: 0.7933466082215178
name: Pearson Dot
- type: spearman_dot
value: 0.8150607506269372
name: Spearman Dot
- type: pearson_max
value: 0.8126195756472175
name: Pearson Max
- type: spearman_max
value: 0.8198976037040269
name: Spearman Max
---
# SentenceTransformer based on Kerneld/klue-roberta-base-klue-sts
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Kerneld/klue-roberta-base-klue-sts](https://huggingface.co/Kerneld/klue-roberta-base-klue-sts). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Kerneld/klue-roberta-base-klue-sts](https://huggingface.co/Kerneld/klue-roberta-base-klue-sts) <!-- at revision ab23b26f49367384954ec980ce849944828f97b7 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Kerneld/klue-roberta-base-klue-sts-mrc")
# Run inference
sentences = [
'올 가을 처분되는 오피스텔은 몇 실인가?',
'공급과잉 논란으로 분양침체에 빠졌던 주거용 오피스텔 시장에 최근 수요자들의 관심이 쏠리고 있다. ‘8·28 전·월세 대책’에서 오피스텔을 근로자·서민 주택구입자금의 싼 이자로 대출받아 매입할 수 있도록 했기 때문이다. 또한 올 들어 신규 공급이 대폭 줄면서 미분양 물량 해소가 빨라졌고, 기존 오피스텔의 수익률이 5%를 웃도는 등 시장 전체가 회복 조짐을 보이고 있는 것도 한 요인이다.○아파트 같은 대출 지원과 세제혜택16일 부동산업계에 따르면 최근 정부가 오피스텔에 다양한 세제 및 대출 혜택을 주기로 해 관심이 높아지고 있다. 지난해 4월부터 오피스텔에 대한 매입 임대사업자 등록이 허용된 데다 ‘4·1 부동산 대책’에 따라 연말까지 오피스텔을 구입하면 향후 5년간 양도세가 면제된다.‘8·28 대책’에서는 연 2.8~3.6% 수준인 국민주택기금 지원 대상에 6억원 이하 주거용 오피스텔을 포함시켰다. 다만 부부합산 연 소득이 6000만원 이하여야 하고, 최대 대출금은 2억원까지다. 소형 오피스텔 임대사업자의 임대소득에 대한 소득세·법인세 20% 감면도 추진된다. 기준시가 3억원 이하의 소형 주거용 오피스텔 3실 이상을 5년 이상 임대받을 때 혜택을 볼 수 있다.분양마케팅업체인 반더펠트의 호한철 대표는 “서울 마포 광화문 강남 구로와 분당 정자동 등 업무 밀집지역 인근에서는 주거용 오피스텔 수요가 꾸준하다”며 “최근 정부 대책으로 오피스텔 구입자금 지원 혜택 등이 늘어나며 실수요자나 퇴직자들을 중심으로 분양 문의전화가 증가하고 있다”고 말했다.○올 가을 신규 분양 크게 늘어가을 분양 성수기를 맞아 주거용 오피스텔도 잇따라 선보인다. 전국적으로 10여개 단지, 7000실을 웃도는 규모다. 부동산 개발업체인 파크하비오는 다음달 서울 문정동에서 복합단지 ‘송파 파크 하비오’를 분양한다. 오피스텔이 3527실 규모로 서울 지하철 8호선 장지역이 걸어서 3분 거리다.부동산 개발업체인 엠디엠도 같은 달 경기 수원 광교신도시 업무 8블록에서 647실 규모의 ‘광교 레이크파크’를 공급한다. 35·40층 2개동 규모로 광교호수공원(204만㎡)을 내려다볼 수 있는 게 매력이다. 모든 가구가 남향 3개면 개방 형태로 설계됐다. 입주자에게 클럽라운지에서 365일 식사가 제공되는 이색 서비스도 이뤄진다.서울 강남권, 경기 성남시 정자동과 판교신도시 등에 있는 기존 오피스텔도 투자문의가 이어지고 있다. 지난달부터 임대사업을 염두에 둔 투자자들이 매입에 나서면서 거래가 늘고 있다.부동산114에 따르면 서울지역 오피스텔의 연간 평균 임대수익률은 지난달 기준으로 5.45% 수준이다. 경희궁의아침, 스페이스본 등 종로지역 오피스텔의 수익률은 연 10%에 이르는 것으로 조사됐다.',
'“구글 검색에서 비슷한 주제의 논문이 있으면 가차 없이 떨어뜨리더라고요. 이처럼 엄격하게 새 아이디어만 평가하는 곳은 처음입니다.” 최근 삼성미래기술육성재단 연구과제에 응모한 대학교수는 이렇게 말했다.삼성미래기술육성재단(이사장 국양 서울대 물리천문학부 교수·사진)은 지난해 8월 연구진흥 목적의 공익 연구재단으로 출범했으며, 삼성전자가 5000억원을 내놨다. 수리과학, 물리, 화학, 생명과학 분야 및 융복합 분야에서 창의적이고 도전적인 연구과제를 선정해 연구부터 특허출원까지 전 과정을 지원한다.재단은 설립 취지에 따라 연구과제를 심사할 때 ‘독창성’을 최우선적으로 들여다본다. 응모자의 이름과 소속은 중요하지 않다. 재단 관계자는 “현재의 틀을 허무는 도전적인 과제를 선정한다는 방침”이라고 설명했다. 세계 유일 또는 세계 최고의 독창적인 프런티어 연구와 실패를 두려워하지 않고 과감히 도전하는 연구를 적극 지원하기 위해서다.학계에서는 이 같은 연구과제 평가 및 지원 방법이 미국의 고등방위연구계획국(DARPA)이나 국가과학재단(NSF)과 비슷하다고 말한다. 이들 기관은 미국 정부 차원에서 10~20년 뒤 먹거리를 찾는 일을 하고 있다. 삼성미래기술육성재단이 이처럼 엄격한 기준에 따라 연구과제를 뽑다 보니 참신하고 혁신적인 주제가 많이 나온다. 이원재 서울대 교수의 ‘장뇌축(gut-brain-axis) 연구’가 대표적이다. 장이 두뇌와 미주신경을 통해 소통하면서 사실상 ‘제2 두뇌’ 역할을 한다는 새로운 이론이다. 층간소음을 혁신적으로 줄여줄 수 있는 ‘스큐메타포러스’ 소재(김윤영 서울대 교수 연구)도 재단 연구과제 선정을 통해 소개됐다.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| pearson_cosine | 0.8126 |
| spearman_cosine | 0.8199 |
| pearson_manhattan | 0.7642 |
| spearman_manhattan | 0.7843 |
| pearson_euclidean | 0.7619 |
| spearman_euclidean | 0.7827 |
| pearson_dot | 0.7933 |
| spearman_dot | 0.8151 |
| pearson_max | 0.8126 |
| **spearman_max** | **0.8199** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 17,552 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 17.8 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 241 tokens</li><li>mean: 438.87 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:----------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>한국콘텐츠진흥원장상을 받은 곳은?</code> | <code>㈜연필과지우개(대표 정일)가 자사의 인기 애니메이션 ‘에그로이’를 인형극으로 제작해 9월부터 관객을 직접 찾아가는 공연 서비스를 시작한다고 밝혔다. ‘에그로이’ 인형극은 언택트 시대를 맞아 방문 공연을 신청한 관객이 있는 곳으로 직접 찾아가 공연하거나 영상을 통해 제공하는 방식으로 제작될 예정이다. 이번 ‘에그로이’ 인형극 기획은 한국과학창의재단의 과학문화바우처 상품 공모 선정을 통해 이루어졌으며, 서울 전지역에서 과학문화바우처를 이용해 신청 가능하다. 작품명 ‘어둠의 비밀을 찾아서’로 명명된 ‘에그로이’ 인형극은 손인형극 형식의 약 50분 공연으로 구성되어 있고, 코로나19로 공연이 불가능한 곳은 USB 영상으로 전달한다. 공연 내용은 7세 이상의 아동을 대상으로 다양한 과학 상식과 원리가 우리 생활에 존재한다는 것을 보여주고, 생활 주변에서 평소 그냥 지나쳤던 것들을 과학적 영감으로 볼 수 있게 지적 호기심을 제공한다. ㈜연필과지우개에서는 과학 원리를 스토리 속의 캐릭터에 공감하면서 몰입해 보고 흥미를 누릴 수 있도록 준비했다고 전했다. 제작진은 총괄 정일 대표와 기획 원승준 이사, 홍보/마케팅 팀장 Albayrak Merve Gul, 미디어팀장 Ochieng Joshua Wera, 연출 김미영, 조연출 송영진, 최윤정, 임장현, 인형극 전문 배우 김미란, 권하은, 이인화, 이지원, 임서연, 예승미, 전가영, 전하영 등이 공연을 한다. ‘에그로이’는 최근 제작 완료를 앞둔 각각 1분 30초 길이인 100편의 애니메이션으로, 귀여운 달걀들과 도마뱀 요리사의 좌충우돌 스토리를 담고 있다. 아직 제작중인 애니메이션이지만 유튜브 등록 2주 만에 글로벌 조회수 100만 회를 넘기는 등 해외에서 벌써 작품성을 인정받아 활발하게 계약이 진행되고 있다. ㈜연필과지우개에서는 ‘에그로이’가 이미 미국, 중국, 대만, 인도, 인도네시아, 베트남, 인도, 체코, 브라질 등 9개 국가와 선계약을 완료했다고 밝혔다. 제작사 ㈜연필과지우개는 캐릭터 기반 콘텐츠 전문 기획 제작사이다. 2018년부터 한국콘텐츠 진흥원 CKL 기업지원센터에 입주해 있으며, 스타트업 리그 한국콘텐츠진흥원장상 수상, 대통령 순방 경제사절단 2회 참가, 아파트 브랜드 대우건설과 키즈카페 완공 등 국내에서도 높은 평가를 받고 있다. ㈜연필과지우개는 이번 인형극 제작을 통해 애니메이션에서 공연, TV인형극, 인형 판매 등으로 파생 상품화를 본격적으로 진행할 예정이다. 9월부터 찾아가는 인형극과 10월부터 OTT용 11분 길이의 52부작 해외 수출용 TV 인형극을 출시할 예정이다. ㈜연필과지우개 정일 대표는 “앞으로도 아동들이 과학 공연으로 과학에 대한 지속적인 관심과 미래 과학자의 꿈을 가질 수 있도록 이번 공연을 기획했다”며, “쎄서미스트리트처럼 전세계인에게 사랑받는 인형극을 제작하는 것을 목표로 하고 있다.”고 말했다.</code> |
| <code>MICE 유치, 개최 활동을 지원하는 기관의 수장 이름은?</code> | <code>부산관광공사(사장 정희준)는 올해 부산의 MICE 산업 성장에 공로가 큰 주요인사 선정, 16일 부산힐튼호텔에서 부산 MICE 앰버서더 어워드 행사를 개최했다. 부산MICE 앰버서더는 ▲대형 국제회의 부산 유치, 개최에 기여한 인사 ▲국내학회·협회 임원 및 국제기구의 회원으로 활발하게 활동하는 인사 ▲국제회의 유치 정보를 부산에 지속적으로 제공하고 기여한 인사로 선정했다. 올해 부산MICE 앰버서더에는 부산대학교 선박해양플랜트기술연구원 백점기 원장을 비롯, 한국국제물류협회 김병진 회장, 창원대학교 신기삼 교수 등 17명이 선정되었다. 올해 앰버서더로 선정된 이들은 부산시, 부산관광공사, 부산지역업계와 힘을 합쳐 2022 세계현미경총회, 2021 아시아-오세아니아 면역학회 총회, 2022 국제내연기관협회 세계총회 등 굵직한 회의들을 부산으로 유치하는 성과를 거두었으며 연간활동을 통해 마이스도시 부산을 전세계에 알리는 데 주요한 역할을 해왔다. 부산MICE 앰버서더로 선정된 인사는 위촉패와 더불어 부산관광공사로부터 MICE유치개최와 관련된 연간 활동을 지원받게 된다. 이날 위촉식 행사에는 부산시 관계자를 비롯해 벡스코, 영화의 전당, 지역PCO, 호텔 등 부산 MICE 관련 유관기관 대표들이 함께 참석해 MICE 앰버서더들과 간담회를 갖고 부산 MICE 발전을 위한 협력방안도 모색하였다.</code> |
| <code>치료에 실패한 혈액암 환자의 수는?</code> | <code>T 세포 기반 차세대 면역 치료제 연구개발 전문 기업 네오이뮨텍은 지난달 자사의 ‘NT-I7’(efineptakin alfa)과 글로벌 제약 기업 로슈(Roche)의 면역관문억제제(PD-L1 저해제) ‘Tecentriq®’(티센트릭, atezolizumab)과의 병용 투여에 대한 공동임상 계약을 체결한 데 이어, 최근 FDA로부터 비소세포폐암(NSCLC) 1차 치료제의 임상2상 계획(IND) 승인을 획득함에 따라, 미국 현지에서 임상2상을 진행할 예정이라고 밝혔다. NT-I7은 네오이뮨텍이 개발 중인 T 세포의 증폭을 유도하는 First-in-Class 차세대 면역 항암제로, 단독 요법의 효능뿐 아니라 기존 항암치료제와 병용 투여 시 치료 효과의 시너지가 기대되는 신약이다. 각종 고형암 및 혈액암, 희귀질환, 감염성 질환 환자에 대한 임상을 계획 또는 진행 중이다. 지난 3년간 머크, 로슈, BMS 등 글로벌 선도기업들이 면역항암제의 효능을 증가시킬 수 있는 주요 파트너로 NT-I7에 주목하고 현재 4건의 병용임상에 대한 계약을 체결하고 임상을 진행하고 있다. 비소세포폐암은 폐암의 80~85%를 차지하고 있으며, 암사망의 주요 원인으로 꼽힌다. 2018년에는 전 세계적으로 약 210만 명의 환자가 비소세포폐암 진단을 받고 이 중 약 176만 명이 사망에 이를 정도로 혁신 치료제 개발이 시급한 질환이다. 네오이뮨텍은 이번 임상2상을 통해 치료 경험이 없는 4기 비소세포폐암 환자들에 대한 NT-I7과 Tecentriq®의 병용 치료 항암효과를 평가함으로써 비소세포폐암 1차 치료제로서의 효능을 검증할 예정이다. 앞선 면역관문억제제와의 병용 투여 임상1상을 통해 확인한 안전성 및 효능 결과를 고려하여 1,200㎍/㎏을 NT-I7의 2상을 위한 권장용량으로 결정했다. 양세환 네오이뮨텍 대표이사는 “Tecentriq®과 NT-I7을 병용하면, Tecentriq®을 단일제제로 사용하는 것보다 치료 효능이 증가할 뿐만 아니라, PD-L1 발현이 낮은 비소세포폐암 환자에게도 더 나은 효능을 제공할 것으로 기대된다”면서 “또한 기존의 화학적 치료를 견디기 어려워 받지 못하는 비소세포폐암 환자들에게 새로운 대안을 제시함으로써, 치료 혜택을 받을 수 있는 대상 환자도 더욱 확대될 것으로 기대된다”고 설명했다. 이어 “특히 이번 임상2상은 2차, 3차 치료 옵션이 아닌 1차 치료제로서의 안전성과 효능을 입증하기 위한 과정으로서 의미가 더욱 크다”고 강조했다.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
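As a rough sketch (not the exact training script), a loss with these parameters plugs into the `SentenceTransformerTrainer` API as follows; the dataset contents and base checkpoint name are placeholders:
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses
# Placeholder (query, passage) pairs; MultipleNegativesRankingLoss treats the other
# passages in the same batch as negatives (in-batch negatives).
train_dataset = Dataset.from_dict({
    "sentence_0": ["example query 1", "example query 2"],
    "sentence_1": ["relevant passage 1", "relevant passage 2"],
})
model = SentenceTransformer("base-checkpoint-name")  # placeholder base model
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # cosine similarity by default
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```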
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | spearman_max |
|:------:|:----:|:-------------:|:------------:|
| 0 | 0 | - | 0.8199 |
| 0.4558 | 500 | 0.1651 | - |
| 0.9116 | 1000 | 0.113 | - |
### Framework Versions
- Python: 3.8.20
- Sentence Transformers: 3.2.0
- Transformers: 4.45.2
- PyTorch: 2.0.1
- Accelerate: 0.26.0
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mradermacher/Qevacot-7B-v2-GGUF
|
mradermacher
| 2024-10-17T06:52:06Z | 17 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Qevacot-7B-v2",
"base_model:quantized:bunnycore/Qevacot-7B-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-16T21:17:29Z |
---
base_model: bunnycore/Qevacot-7B-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Qevacot-7B-v2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qevacot-7B-v2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
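As an illustrative example only (not part of the upstream instructions), a single-file quant from this repo can also be run directly with a recent llama.cpp build that supports downloading from the Hub:
```bash
# Example: run the Q4_K_M quant listed below straight from the Hub.
llama-cli --hf-repo mradermacher/Qevacot-7B-v2-GGUF --hf-file Qevacot-7B-v2.Q4_K_M.gguf -p "Hello"
```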
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Jerry666/GOT-OCR2_0-716M-BF16-GGUF
|
Jerry666
| 2024-10-17T06:51:44Z | 232 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-16T09:09:44Z |
# Release
- 2024.10.16: [GOT-OCR2_0-716M-BF16-GGUF](https://huggingface.co/Jerry666/GOT-OCR2_0-716M-BF16-GGUF)
# Description
[gguf-py](https://github.com/jerrylsu/gguf-py) is a Python package for writing binary files in the GGUF format, based on llama.cpp.
# Usage
```bash
python convert_hf_to_gguf.py --outtype bf16 --model ~/GOT-OCR2_0 --outfile ~/output/GOT-OCR2_0-GGUF
```
# Adding Supported Model
[GOT_OCR2](https://huggingface.co/stepfun-ai/GOT-OCR2_0)
Continue...
# References
[llama.cpp](https://github.com/ggerganov/llama.cpp): LLM inference in C/C++.
[GOT-OCR2.0](https://github.com/Ucas-HaoranWei/GOT-OCR2.0): Official code implementation of General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model.
|
boltxz/codeparrot-small
|
boltxz
| 2024-10-17T06:50:48Z | 202 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T06:50:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
unsloth/Llama-3.1-Nemotron-70B-Instruct
|
unsloth
| 2024-10-17T06:49:25Z | 27 | 5 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"nvidia",
"llama3.1",
"unsloth",
"conversational",
"en",
"dataset:nvidia/HelpSteer2",
"base_model:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"base_model:finetune:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T02:00:37Z |
---
base_model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
datasets:
- nvidia/HelpSteer2
language:
- en
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
- nvidia
- llama3.1
- unsloth
- llama
---
# Finetune Llama 3.2, NVIDIA Nemotron, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/Llama-3.1-Nemotron-70B-Instruct
For more details on the model, please go to NVIDIA's original [model card](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Meta and Llama team for creating these models, and to NVIDIA for fine-tuning and releasing them.
|
unsloth/Llama-3.1-Nemotron-70B-Instruct-GGUF
|
unsloth
| 2024-10-17T06:49:07Z | 113 | 1 |
transformers
|
[
"transformers",
"gguf",
"nvidia",
"llama3.1",
"unsloth",
"llama",
"text-generation",
"en",
"dataset:nvidia/HelpSteer2",
"base_model:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"base_model:quantized:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-10-17T05:19:03Z |
---
base_model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
datasets:
- nvidia/HelpSteer2
language:
- en
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
- nvidia
- llama3.1
- unsloth
- llama
---
# Finetune Llama 3.2, NVIDIA Nemotron, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/Llama-3.1-Nemotron-70B-Instruct-GGUF
For more details on the model, please go to NVIDIA's original [model card](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Meta and Llama team for creating these models, and to NVIDIA for fine-tuning and releasing them.
|
rahul-bhoyar-1995/bart-cnn-samsum-finetuned
|
rahul-bhoyar-1995
| 2024-10-17T06:45:41Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-10-17T06:45:04Z |
---
library_name: transformers
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: bart-cnn-samsum-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-samsum-finetuned
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
## Model description
More information needed
## Intended uses & limitations
More information needed
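As a hedged sketch only (the card does not document intended usage), dialogue summarization in the SAMSum style can be tried with the 🤗 `pipeline` API:
```python
from transformers import pipeline
# Minimal sketch: summarize a short dialogue with this checkpoint.
summarizer = pipeline("summarization", model="rahul-bhoyar-1995/bart-cnn-samsum-finetuned")
dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure! I'll come by after work."
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```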
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6464 | 1.0 | 19 | 0.1365 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.2.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
SongTonyLi/Llama-3.2-1B-Instruct-SFT-D_chosen-pref-mix2
|
SongTonyLi
| 2024-10-17T06:45:32Z | 35 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-01T07:30:00Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
abdelrahmanelsheikh39/SentimentAnalysisAtDEPI2
|
abdelrahmanelsheikh39
| 2024-10-17T06:45:00Z | 119 | 1 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:cardiffnlp/twitter-roberta-base-sentiment",
"base_model:finetune:cardiffnlp/twitter-roberta-base-sentiment",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-17T06:44:31Z |
---
library_name: transformers
base_model: cardiffnlp/twitter-roberta-base-sentiment
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SentimentAnalysisAtDEPI2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SentimentAnalysisAtDEPI2
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4876
- Accuracy: 0.8480
## Model description
More information needed
## Intended uses & limitations
More information needed
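As a hedged sketch only (the card does not document intended usage), the classifier can be tried with the 🤗 `pipeline` API; note that label names depend on the fine-tuning configuration (the base cardiffnlp checkpoint maps LABEL_0/1/2 to negative/neutral/positive):
```python
from transformers import pipeline
# Minimal sketch: classify the sentiment of a single sentence.
clf = pipeline("text-classification", model="abdelrahmanelsheikh39/SentimentAnalysisAtDEPI2")
print(clf("The delivery was fast and the product works great!"))
# e.g. [{'label': 'LABEL_2', 'score': 0.97}]; label meaning depends on the training setup
```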
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5493 | 1.0 | 14212 | 0.5040 | 0.8133 |
| 0.38 | 2.0 | 28424 | 0.4682 | 0.8371 |
| 0.3531 | 3.0 | 42636 | 0.4678 | 0.8433 |
| 0.3067 | 4.0 | 56848 | 0.4876 | 0.8480 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
Triangle104/Qwen2.5-7B-Instruct-abliterated-v2-Q4_K_M-GGUF
|
Triangle104
| 2024-10-17T06:40:42Z | 9 | 1 |
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2",
"base_model:quantized:huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-10-02T10:59:50Z |
---
base_model: huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-7B-Instruct-abliterated-v2-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2`](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2) for more details on the model.
---
Model details:
-
This is an uncensored version of Qwen/Qwen2.5-7B-Instruct created with abliteration (see this article to know more about it).
Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
Important Note: This version is an improvement over the previous release, Qwen2.5-7B-Instruct-abliterated.
Usage
You can use this model in your applications by loading it with Hugging Face's transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Initialize conversation context
initial_messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy()  # Copy the initial conversation context

# Enter conversation loop
while True:
    # Get user input
    user_input = input("User: ").strip()  # Strip leading and trailing spaces

    # If the user types '/exit', end the conversation
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break

    # If the user types '/clean', reset the conversation context
    if user_input.lower() == "/clean":
        messages = initial_messages.copy()  # Reset conversation context
        print("Chat history cleared. Starting a new conversation.")
        continue

    # If input is empty, prompt the user and continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    # Add user input to the conversation
    messages.append({"role": "user", "content": user_input})

    # Build the chat template
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )

    # Tokenize input and prepare it for the model
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # Generate a response from the model
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=8192
    )

    # Extract model output, removing special tokens
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # Add the model's response to the conversation
    messages.append({"role": "assistant", "content": response})

    # Print the model's response
    print(f"Qwen: {response}")
```
Evaluations
The following data has been re-evaluated and calculated as the average for each test.
| Benchmark | Qwen2.5-7B-Instruct | Qwen2.5-7B-Instruct-abliterated-v2 | Qwen2.5-7B-Instruct-abliterated |
|:-----------|--------------------:|-----------------------------------:|--------------------------------:|
| IF_Eval | 76.44 | 77.82 | 76.49 |
| MMLU Pro | 43.12 | 42.03 | 41.71 |
| TruthfulQA | 62.46 | 57.81 | 64.92 |
| BBH | 53.92 | 53.01 | 52.77 |
| GPQA | 31.91 | 32.17 | 31.97 |
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-abliterated-v2-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-abliterated-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-abliterated-v2-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-abliterated-v2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-abliterated-v2-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-abliterated-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-abliterated-v2-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-abliterated-v2-q4_k_m.gguf -c 2048
```
|
Niraya666/wmc_v2_swin-tiny-patch4-window7-224_base_wm811k_cls_contra_learning_1017_6_cls
|
Niraya666
| 2024-10-17T06:37:34Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"swin",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-17T06:37:21Z |
---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: wmc_v2_swin-tiny-patch4-window7-224_base_wm811k_cls_contra_learning_1017_6_cls
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wmc_v2_swin-tiny-patch4-window7-224_base_wm811k_cls_contra_learning_1017_6_cls
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0632
- Accuracy: 0.9760
- Precision: 0.9607
- Recall: 0.9621
- F1: 0.9611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.131 | 0.1697 | 100 | 0.6398 | 0.7195 | 0.6527 | 0.5655 | 0.5087 |
| 0.6353 | 0.3394 | 200 | 0.2183 | 0.9314 | 0.8628 | 0.8571 | 0.8592 |
| 0.5429 | 0.5091 | 300 | 0.1273 | 0.9594 | 0.9347 | 0.8905 | 0.9073 |
| 0.4432 | 0.6788 | 400 | 0.1040 | 0.9680 | 0.9322 | 0.9489 | 0.9397 |
| 0.4109 | 0.8485 | 500 | 0.0998 | 0.9697 | 0.9508 | 0.9442 | 0.9474 |
| 0.3775 | 1.0182 | 600 | 0.1209 | 0.9573 | 0.9323 | 0.9382 | 0.9326 |
| 0.3661 | 1.1880 | 700 | 0.0968 | 0.9697 | 0.9313 | 0.9579 | 0.9428 |
| 0.3609 | 1.3577 | 800 | 0.0879 | 0.9707 | 0.9479 | 0.9524 | 0.9498 |
| 0.3393 | 1.5274 | 900 | 0.0785 | 0.9734 | 0.9536 | 0.9566 | 0.9547 |
| 0.3242 | 1.6971 | 1000 | 0.0773 | 0.9732 | 0.9456 | 0.9626 | 0.9533 |
| 0.3307 | 1.8668 | 1100 | 0.0626 | 0.9781 | 0.9631 | 0.9601 | 0.9615 |
| 0.3325 | 2.0365 | 1200 | 0.0662 | 0.9767 | 0.9575 | 0.9637 | 0.9603 |
| 0.2889 | 2.2062 | 1300 | 0.0609 | 0.9780 | 0.9526 | 0.9673 | 0.9594 |
| 0.2818 | 2.3759 | 1400 | 0.0656 | 0.9762 | 0.9566 | 0.9619 | 0.9588 |
| 0.3038 | 2.5456 | 1500 | 0.0561 | 0.9795 | 0.9666 | 0.9633 | 0.9647 |
| 0.2823 | 2.7153 | 1600 | 0.0610 | 0.9781 | 0.9590 | 0.9668 | 0.9626 |
| 0.2478 | 2.8850 | 1700 | 0.0632 | 0.9760 | 0.9607 | 0.9621 | 0.9611 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
COGNANO/VHHBERT
|
COGNANO
| 2024-10-17T06:35:53Z | 124 | 1 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"fill-mask",
"biology",
"protein",
"antibody",
"VHH",
"dataset:COGNANO/VHHCorpus-2M",
"arxiv:2405.18749",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-05-11T01:42:46Z |
---
license: mit
datasets:
- COGNANO/VHHCorpus-2M
library_name: transformers
tags:
- biology
- protein
- antibody
- VHH
---
## VHHBERT
VHHBERT is a RoBERTa-based model pre-trained on two million VHH sequences in [VHHCorpus-2M](https://huggingface.co/datasets/COGNANO/VHHCorpus-2M).
VHHBERT has the same model parameters as RoBERTa<sub>BASE</sub>, except that it used positional embeddings with a length of 185 to cover the maximum sequence length of 179 in VHHCorpus-2M.
Further details on VHHBERT are described in our paper "[A SARS-CoV-2 Interaction Dataset and VHH Sequence Corpus for Antibody Language Models](https://arxiv.org/abs/2405.18749)."
## Usage
The model and tokenizer can be loaded using the `transformers` library.
```python
from transformers import BertTokenizer, RobertaModel
tokenizer = BertTokenizer.from_pretrained("COGNANO/VHHBERT")
model = RobertaModel.from_pretrained("COGNANO/VHHBERT")
```
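As a minimal sketch (not from the original card; the space-separated residue format is an assumption about how the tokenizer vocabulary expects input), per-sequence embeddings can be obtained by mean-pooling the last hidden states:
```python
import torch
from transformers import BertTokenizer, RobertaModel

tokenizer = BertTokenizer.from_pretrained("COGNANO/VHHBERT")
model = RobertaModel.from_pretrained("COGNANO/VHHBERT")

# Hypothetical VHH fragment, written as space-separated amino-acid residues.
sequence = "Q V Q L V E S G G G L V Q A G G S L R L S C A A S"
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token representations into one fixed-size embedding per sequence.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # e.g. torch.Size([1, 768]) for a RoBERTa-BASE-sized model
```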
## Links
- Pre-training Corpus: https://huggingface.co/datasets/COGNANO/VHHCorpus-2M
- Code: https://github.com/cognano/AVIDa-SARS-CoV-2
- Paper: https://arxiv.org/abs/2405.18749
## Citation
If you use VHHBERT in your research, please cite the following paper.
```bibtex
@inproceedings{tsuruta2024sars,
title={A {SARS}-{C}o{V}-2 Interaction Dataset and {VHH} Sequence Corpus for Antibody Language Models},
author={Hirofumi Tsuruta and Hiroyuki Yamazaki and Ryota Maeda and Ryotaro Tamura and Akihiro Imura},
booktitle={Advances in Neural Information Processing Systems 37},
year={2024}
}
```
|
mradermacher/QevaCoT-7B-Stock-i1-GGUF
|
mradermacher
| 2024-10-17T06:32:05Z | 963 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/QevaCoT-7B-Stock",
"base_model:quantized:bunnycore/QevaCoT-7B-Stock",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-17T05:32:51Z |
---
base_model: bunnycore/QevaCoT-7B-Stock
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/bunnycore/QevaCoT-7B-Stock
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/QevaCoT-7B-Stock-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jjaegii/Llama-3.1-8B-LoRA-kolon-sg-v2-merged-GPTQ-INT4
|
jjaegii
| 2024-10-17T06:25:49Z | 5 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-10-17T03:00:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q6_K-GGUF
|
Triangle104
| 2024-10-17T06:20:57Z | 7 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2",
"base_model:quantized:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T06:19:08Z |
---
base_model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q6_K-GGUF
This model was converted to GGUF format from [`ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2`](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2) for more details on the model.
---
Model details:
UPDATE: For those getting gibberish results: the model was merged to the base incorrectly after LoRA training. All files have been reuploaded, so it should work properly now.
RPMax is a series of models trained on a diverse set of curated creative writing and RP datasets with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive: no two entries in the dataset share characters or situations, so the model does not latch on to a single personality and remains able to understand and act appropriately for any character or situation.
Early user tests suggest that these models do not feel like other RP models, having a different style and generally not feeling in-bred.
You can access the model at https://arliai.com and ask questions at https://www.reddit.com/r/ArliAI/
We also have a models ranking page at https://www.arliai.com/models-ranking
Ask questions in our new Discord server: https://discord.com/invite/t75KbPgwhk
Model Description
ArliAI-RPMax-12B-v1.2 is a variant based on Mistral Nemo 12B Instruct 2407.
This is arguably the most successful RPMax model, in part because Mistral is already very uncensored in the first place.
The v1.2 update completely removes non-creative/RP examples from the dataset and is an incremental improvement of the RPMax dataset: it deduplicates the data further and filters out irrelevant description text that came from card-sharing sites.
Specs
- Context Length: 128K
- Parameters: 12B
Training Details
- Sequence Length: 8192
- Training Duration: approximately 2 days on 2x 3090 Ti
- Epochs: 1 epoch, to minimize repetition sickness
- LoRA: rank 64, alpha 128, resulting in ~2% trainable weights
- Learning Rate: 0.00001
- Gradient accumulation: 32 (intentionally low, for better learning)
Quantization
The model is available in quantized formats:
- FP16: https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
- GGUF: https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-GGUF
Suggested Prompt Format
Mistral Instruct Prompt Format
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q6_K-GGUF --hf-file mistral-nemo-12b-arliai-rpmax-v1.2-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q6_K-GGUF --hf-file mistral-nemo-12b-arliai-rpmax-v1.2-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q6_K-GGUF --hf-file mistral-nemo-12b-arliai-rpmax-v1.2-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q6_K-GGUF --hf-file mistral-nemo-12b-arliai-rpmax-v1.2-q6_k.gguf -c 2048
```
|
Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF
|
Triangle104
| 2024-10-17T06:11:58Z | 9 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2",
"base_model:quantized:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T05:58:15Z |
---
base_model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF
This model was converted to GGUF format from [`ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2`](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2) for more details on the model.
---
Model details:
UPDATE: For those getting gibberish results: the model was merged to the base incorrectly after LoRA training. All files have been reuploaded, so it should work properly now.
RPMax is a series of models trained on a diverse set of curated creative writing and RP datasets with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive: no two entries in the dataset share characters or situations, so the model does not latch on to a single personality and remains able to understand and act appropriately for any character or situation.
Early user tests suggest that these models do not feel like other RP models, having a different style and generally not feeling in-bred.
You can access the model at https://arliai.com and ask questions at https://www.reddit.com/r/ArliAI/
We also have a models ranking page at https://www.arliai.com/models-ranking
Ask questions in our new Discord server: https://discord.com/invite/t75KbPgwhk
Model Description
ArliAI-RPMax-12B-v1.2 is a variant based on Mistral Nemo 12B Instruct 2407.
This is arguably the most successful RPMax model, in part because Mistral is already very uncensored in the first place.
The v1.2 update completely removes non-creative/RP examples from the dataset and is an incremental improvement of the RPMax dataset: it deduplicates the data further and filters out irrelevant description text that came from card-sharing sites.
Specs
- Context Length: 128K
- Parameters: 12B
Training Details
- Sequence Length: 8192
- Training Duration: approximately 2 days on 2x 3090 Ti
- Epochs: 1 epoch, to minimize repetition sickness
- LoRA: rank 64, alpha 128, resulting in ~2% trainable weights
- Learning Rate: 0.00001
- Gradient accumulation: 32 (intentionally low, for better learning)
Quantization
The model is available in quantized formats:
- FP16: https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
- GGUF: https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-GGUF
Suggested Prompt Format
Mistral Instruct Prompt Format
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF --hf-file mistral-nemo-12b-arliai-rpmax-v1.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF --hf-file mistral-nemo-12b-arliai-rpmax-v1.2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF --hf-file mistral-nemo-12b-arliai-rpmax-v1.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF --hf-file mistral-nemo-12b-arliai-rpmax-v1.2-q4_k_m.gguf -c 2048
```
|
Aldrich12/my-fine-tuned-model-ppo
|
Aldrich12
| 2024-10-17T06:10:00Z | 202 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T06:08:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
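While the card does not yet provide code, the following is a minimal sketch using the 🤗 transformers text-generation pipeline; the prompt and generation settings are illustrative assumptions, not part of the original card.
```python
# Minimal sketch: load the model from the Hub and generate text.
# Assumes the standard transformers text-generation workflow.
from transformers import pipeline

generator = pipeline("text-generation", model="Aldrich12/my-fine-tuned-model-ppo")

# Illustrative prompt and settings only.
output = generator("Hello, world!", max_new_tokens=50, do_sample=True)
print(output[0]["generated_text"])
```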
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gkMSDA/FinChat298_Solar248M_Pretrain_DJ30_Model_V2
|
gkMSDA
| 2024-10-17T06:04:30Z | 132 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T06:03:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
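No usage code is provided yet; below is a minimal sketch with AutoTokenizer/AutoModelForCausalLM. The prompt and generation parameters are illustrative assumptions, not from the original card.
```python
# Minimal sketch: load the checkpoint with transformers and run generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gkMSDA/FinChat298_Solar248M_Pretrain_DJ30_Model_V2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Illustrative prompt only.
inputs = tokenizer("The Dow Jones 30 index is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```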
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
win10/dolphin-2.9.3-mistral-nemo-20b-V2
|
win10
| 2024-10-17T05:50:27Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b",
"base_model:finetune:cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T05:44:44Z |
---
base_model:
- cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b](https://huggingface.co/cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 16]
model: cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
- sources:
- layer_range: [8, 24]
model: cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
- sources:
- layer_range: [16, 32]
model: cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
- sources:
- layer_range: [24, 40]
model: cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
```
|
Falah/haider_al_abadi
|
Falah
| 2024-10-17T05:49:24Z | 7 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-17T04:43:39Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Haider_Al_Abadi
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Falah/haider_al_abadi', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mlx-community/Llama-3.1-Nemotron-70B-Instruct-HF-8bit
|
mlx-community
| 2024-10-17T05:46:58Z | 61 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"nvidia",
"llama3.1",
"mlx",
"conversational",
"en",
"dataset:nvidia/HelpSteer2",
"base_model:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"base_model:quantized:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] |
text-generation
| 2024-10-16T16:59:23Z |
---
base_model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
datasets:
- nvidia/HelpSteer2
language:
- en
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
- nvidia
- llama3.1
- mlx
inference: false
fine-tuning: false
---
# mlx-community/Llama-3.1-Nemotron-70B-Instruct-HF-8bit
The Model [mlx-community/Llama-3.1-Nemotron-70B-Instruct-HF-8bit](https://huggingface.co/mlx-community/Llama-3.1-Nemotron-70B-Instruct-HF-8bit) was converted to MLX format from [nvidia/Llama-3.1-Nemotron-70B-Instruct-HF](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF) using mlx-lm version **0.19.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Llama-3.1-Nemotron-70B-Instruct-HF-8bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mlx-community/Llama-3.1-Nemotron-70B-Instruct-HF-4bit
|
mlx-community
| 2024-10-17T05:45:10Z | 27 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"nvidia",
"llama3.1",
"mlx",
"conversational",
"en",
"dataset:nvidia/HelpSteer2",
"base_model:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"base_model:quantized:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] |
text-generation
| 2024-10-17T04:48:10Z |
---
base_model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
datasets:
- nvidia/HelpSteer2
language:
- en
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
- nvidia
- llama3.1
- mlx
inference: false
fine-tuning: false
---
# mlx-community/Llama-3.1-Nemotron-70B-Instruct-HF-4bit
The Model [mlx-community/Llama-3.1-Nemotron-70B-Instruct-HF-4bit](https://huggingface.co/mlx-community/Llama-3.1-Nemotron-70B-Instruct-HF-4bit) was converted to MLX format from [nvidia/Llama-3.1-Nemotron-70B-Instruct-HF](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF) using mlx-lm version **0.19.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Llama-3.1-Nemotron-70B-Instruct-HF-4bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF
|
mradermacher
| 2024-10-17T05:44:06Z | 40 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Phi-3.5-mini-TitanFusion-0.2",
"base_model:quantized:bunnycore/Phi-3.5-mini-TitanFusion-0.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-16T21:10:53Z |
---
base_model: bunnycore/Phi-3.5-mini-TitanFusion-0.2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Phi-3.5-mini-TitanFusion-0.2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
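As a quick, hedged example (not part of the original card), any of the files listed below can be downloaded and run directly with a recent llama.cpp build, for instance:
```bash
# Example only: pulls the Q4_K_M file from this repo and runs a short prompt.
llama-cli --hf-repo mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF --hf-file Phi-3.5-mini-TitanFusion-0.2.Q4_K_M.gguf -p "Hello"
```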
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF
|
mradermacher
| 2024-10-17T05:44:05Z | 31 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Phi-3.5-mini-TitanFusion-0.2",
"base_model:quantized:bunnycore/Phi-3.5-mini-TitanFusion-0.2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-17T05:05:38Z |
---
base_model: bunnycore/Phi-3.5-mini-TitanFusion-0.2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/bunnycore/Phi-3.5-mini-TitanFusion-0.2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-IQ2_M.gguf) | i1-IQ2_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-IQ3_S.gguf) | i1-IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-IQ3_M.gguf) | i1-IQ3_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 2.3 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 2.3 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 2.3 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-Q4_0.gguf) | i1-Q4_0 | 2.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.2-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.2.i1-Q6_K.gguf) | i1-Q6_K | 3.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Inabia-AI/ark_unbranded_claims_standalone_lora_3.1_10162024
|
Inabia-AI
| 2024-10-17T05:40:20Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T05:31:25Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** Inabia-AI
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BEGADE/bot
|
BEGADE
| 2024-10-17T05:18:18Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"license:mit",
"region:us"
] | null | 2024-10-17T04:54:50Z |
---
base_model: openai-community/gpt2
library_name: peft
license: mit
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: bot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bot
This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the None dataset.
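Since the card does not include usage code, here is a minimal loading sketch; it assumes the repository hosts a PEFT LoRA adapter for the openai-community/gpt2 base model listed in the metadata, and the prompt is illustrative.
```python
# Minimal sketch: load the PEFT adapter together with its gpt2 base model.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("BEGADE/bot")
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```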
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Paoloc99/litm_model_new_reg_100_0
|
Paoloc99
| 2024-10-17T05:18:08Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-17T05:17:40Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
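Pending details from the authors, here is a minimal sketch; it assumes the checkpoint was saved with its 4-bit bitsandbytes quantization config (as the repository tags suggest) and that the tokenizer ships a chat template. The message content is illustrative.
```python
# Minimal sketch: load the checkpoint and generate from a chat-formatted prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Paoloc99/litm_model_new_reg_100_0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize the key idea in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```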
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
timaeus/H8-dh8
|
timaeus
| 2024-10-17T05:13:09Z | 5 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-10-17T03:23:54Z |
# H8-dh8 Checkpoints
This repository contains the final trained model and intermediate checkpoints.
- The main directory contains the fully trained model (checkpoint 75000).
- The `checkpoints` directory contains all intermediate checkpoints.
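As an illustrative, hedged example (file names inside the directory are not documented here), the intermediate checkpoints can be fetched selectively with huggingface_hub:
```python
# Sketch: download only the intermediate checkpoints from the repo.
# The allow_patterns filter assumes the files live under "checkpoints/" as described above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="timaeus/H8-dh8",
    allow_patterns=["checkpoints/*"],  # skip the final model in the repo root
)
print("Checkpoints downloaded to:", local_dir)
```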
|
maicog/Llama-3.2-1B-Instruct-abliterated-Q4_K_S-GGUF
|
maicog
| 2024-10-17T05:04:48Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"base_model:huihui-ai/Llama-3.2-1B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Llama-3.2-1B-Instruct-abliterated",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T05:04:40Z |
---
base_model: huihui-ai/Llama-3.2-1B-Instruct-abliterated
library_name: transformers
license: llama3.2
tags:
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# maicog/Llama-3.2-1B-Instruct-abliterated-Q4_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Llama-3.2-1B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Llama-3.2-1B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Llama-3.2-1B-Instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo maicog/Llama-3.2-1B-Instruct-abliterated-Q4_K_S-GGUF --hf-file llama-3.2-1b-instruct-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo maicog/Llama-3.2-1B-Instruct-abliterated-Q4_K_S-GGUF --hf-file llama-3.2-1b-instruct-abliterated-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo maicog/Llama-3.2-1B-Instruct-abliterated-Q4_K_S-GGUF --hf-file llama-3.2-1b-instruct-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo maicog/Llama-3.2-1B-Instruct-abliterated-Q4_K_S-GGUF --hf-file llama-3.2-1b-instruct-abliterated-q4_k_s.gguf -c 2048
```
|
BroAlanTaps/GPT2-large-4-24000steps
|
BroAlanTaps
| 2024-10-17T05:02:53Z | 133 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T05:00:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
karthikmsk/deberta-base-v3-nli
|
karthikmsk
| 2024-10-17T05:01:23Z | 5 | 0 | null |
[
"safetensors",
"deberta-v2",
"license:apache-2.0",
"region:us"
] | null | 2024-10-13T12:38:06Z |
---
license: apache-2.0
---
|
maicog/Llama-3.2-3B-Instruct-abliterated-Q4_K_M-GGUF
|
maicog
| 2024-10-17T04:43:15Z | 49 | 0 |
transformers
|
[
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"base_model:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-17T04:43:01Z |
---
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
library_name: transformers
license: llama3.2
tags:
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# maicog/Llama-3.2-3B-Instruct-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Llama-3.2-3B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo maicog/Llama-3.2-3B-Instruct-abliterated-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-abliterated-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo maicog/Llama-3.2-3B-Instruct-abliterated-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-abliterated-q4_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo maicog/Llama-3.2-3B-Instruct-abliterated-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-abliterated-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo maicog/Llama-3.2-3B-Instruct-abliterated-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-abliterated-q4_k_m-imat.gguf -c 2048
```
|
Waynetiang/my_awesome_eli5_mlm_model
|
Waynetiang
| 2024-10-17T04:31:31Z | 116 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-10-17T03:55:07Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilroberta-base
tags:
- generated_from_trainer
datasets:
- eli5_category
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0364
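The card does not include usage code; a minimal sketch with the fill-mask pipeline is shown below (the example sentence is illustrative).
```python
# Minimal sketch: use the fine-tuned model to fill a masked token.
# RoBERTa-style tokenizers use "<mask>" as the mask token.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Waynetiang/my_awesome_eli5_mlm_model")
for prediction in unmasker("The Milky Way is a <mask> galaxy."):
    print(prediction["token_str"], round(prediction["score"], 3))
```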
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.736 | 1.0 | 1332 | 2.1167 |
| 1.8882 | 2.0 | 2664 | 2.0456 |
| 2.016 | 3.0 | 3996 | 2.0335 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
BEGADE/chatbot
|
BEGADE
| 2024-10-17T04:29:31Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"license:mit",
"region:us"
] | null | 2024-10-17T03:50:05Z |
---
base_model: gpt2
library_name: peft
license: mit
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chatbot
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
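Since this repository holds a PEFT (LoRA) adapter rather than full weights, it has to be applied on top of the `gpt2` base model. A minimal loading sketch, assuming the standard PEFT API:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the gpt2 base model and applies the adapter stored in this repository.
model = AutoPeftModelForCausalLM.from_pretrained("BEGADE/chatbot")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```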
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
timaeus/H16-dh32
|
timaeus
| 2024-10-17T04:18:54Z | 6 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-10-17T04:15:48Z |
# H16-dh32 Checkpoints
This repository contains the final trained model and intermediate checkpoints.
- The main directory contains the fully trained model (checkpoint 75000).
- The `checkpoints` directory contains all intermediate checkpoints.
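To fetch only part of the repository, the download can be filtered with `huggingface_hub`; a minimal sketch (the checkpoint subdirectory named below is illustrative):
```python
from huggingface_hub import snapshot_download

# Download only the final model files, skipping the intermediate checkpoints.
snapshot_download(repo_id="timaeus/H16-dh32", ignore_patterns=["checkpoints/*"])

# Or fetch a single intermediate checkpoint directory (directory name is illustrative).
snapshot_download(repo_id="timaeus/H16-dh32", allow_patterns=["checkpoints/5000/*"])
```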
|
timaeus/H32-dh32
|
timaeus
| 2024-10-17T04:15:44Z | 6 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-10-17T04:11:24Z |
# H32-dh32 Checkpoints
This repository contains the final trained model and intermediate checkpoints.
- The main directory contains the fully trained model (checkpoint 75000).
- The `checkpoints` directory contains all intermediate checkpoints.
|
keffy04/my_awesome_eli5_mlm_model
|
keffy04
| 2024-10-17T04:07:06Z | 116 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-10-17T03:49:06Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilroberta-base
tags:
- generated_from_trainer
datasets:
- eli5_category
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2243 | 1.0 | 1326 | 2.0414 |
| 2.1729 | 2.0 | 2652 | 2.0209 |
| 2.1382 | 3.0 | 3978 | 2.0019 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
stablediffusionapi/bx-erotic-vision
|
stablediffusionapi
| 2024-10-17T04:02:26Z | 59 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-10-17T03:59:41Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# bx-Erotic Vision API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "bx-erotic-vision".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/bx-erotic-vision)
Model link: [View model](https://modelslab.com/models/bx-erotic-vision)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "bx-erotic-vision",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
MHGanainy/gpt2-xl-lora-multi-shared-512
|
MHGanainy
| 2024-10-17T04:01:09Z | 117 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:openai-community/gpt2-xl",
"base_model:adapter:openai-community/gpt2-xl",
"license:mit",
"region:us"
] | null | 2024-10-16T20:05:36Z |
---
base_model: openai-community/gpt2-xl
library_name: peft
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-lora-multi-shared-512
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-lora-multi-shared-512
This model is a fine-tuned version of [openai-community/gpt2-xl](https://huggingface.co/openai-community/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1271
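This repository stores a LoRA adapter, so it must be attached to the `gpt2-xl` base model. A minimal loading sketch, assuming the standard PEFT API:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model first, then attach the LoRA adapter from this repository.
base = AutoModelForCausalLM.from_pretrained("openai-community/gpt2-xl")
model = PeftModel.from_pretrained(base, "MHGanainy/gpt2-xl-lora-multi-shared-512")
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-xl")
```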
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 69884
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
WillzWayn/sakura-card-flux-dev-lora
|
WillzWayn
| 2024-10-17T04:00:35Z | 10 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"text-to-image",
"en",
"dataset:WillzWayn/sakura-card-captor-cards",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-17T00:16:58Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
base_model: black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
instance_prompt: gusayn
datasets:
- WillzWayn/sakura-card-captor-cards
---
# Wayn Flux Dev Lora
## Trigger words
You should use `SAKURACARD` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('WillzWayn/sakura-card-flux-dev-lora', weight_name='sakura-card-flux-dev-lora.safetensors')
image = pipeline('your prompt in a style of SAKURACARD').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
aranguiz/dpo_llama_cs329x
|
aranguiz
| 2024-10-17T04:00:20Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-17T03:58:50Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
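A minimal loading sketch, assuming the standard `transformers` API (the repository is tagged as 4-bit/bitsandbytes, so `bitsandbytes` and a CUDA GPU may be required; the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aranguiz/dpo_llama_cs329x"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires accelerate; the stored quantization config may require bitsandbytes.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```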
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Riheng/my_awesome_eli5_mlm_model
|
Riheng
| 2024-10-17T03:56:45Z | 181 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-10-17T03:41:12Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilroberta-base
tags:
- generated_from_trainer
datasets:
- eli5_category
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0175
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2553 | 1.0 | 1334 | 2.0462 |
| 2.1709 | 2.0 | 2668 | 2.0365 |
| 2.1217 | 3.0 | 4002 | 1.9829 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Raelisa/my_awesome_eli5_mlm_model
|
Raelisa
| 2024-10-17T03:53:06Z | 179 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-10-17T03:38:21Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilroberta-base
tags:
- generated_from_trainer
datasets:
- eli5_category
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2522 | 1.0 | 1323 | 2.0642 |
| 2.1739 | 2.0 | 2646 | 2.0282 |
| 2.1348 | 3.0 | 3969 | 2.0298 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
HINT-lab/llama3-8b-final-ppo-m-v0.3
|
HINT-lab
| 2024-10-17T03:52:00Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2410.09724",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-12T01:59:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
**PPO-M** (PPO with Calibrated Reward Modeling) is an RLHF algorithm to mitigate verbalized overconfidence in RLHF-trained Large Language Models.
PPO-M calibrates the reward modeling process by augmenting the binary pairwise ranking dataset with explicit confidence scores, and encourages the
reward model to align confidence levels with response quality.
Please refer to our preprint ([Taming Overconfidence in LLMs: Reward Calibration in RLHF](https://arxiv.org/abs/2410.09724)) and [repo](https://github.com/SeanLeng1/Reward-Calibration) for more details.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
We train [OpenRLHF/Llama-3-8b-sft-mixture](https://huggingface.co/OpenRLHF/Llama-3-8b-sft-mixture) on our [HINT-lab/prompt-collections-final-v0.3](https://huggingface.co/datasets/HINT-lab/prompt-collections-final-v0.3)
with our calibrated reward model [HINT-lab/llama3-8b-crm-final-v0.1](https://huggingface.co/HINT-lab/llama3-8b-crm-final-v0.1).
- **Developed by:** Jixuan Leng, Chengsong Huang, Banghua Zhu, Jiaxin Huang
- **Finetuned from model:** [OpenRLHF/Llama-3-8b-sft-mixture](https://huggingface.co/OpenRLHF/Llama-3-8b-sft-mixture)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [Our repo](https://github.com/SeanLeng1/Reward-Calibration)
- **Paper:** [Taming Overconfidence in LLMs: Reward Calibration in RLHF](https://arxiv.org/abs/2410.09724)
<!-- - **Demo [optional]:** [More Information Needed] -->
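A minimal inference sketch, assuming the tokenizer ships a chat template and the standard `transformers` API (the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HINT-lab/llama3-8b-final-ppo-m-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "How confident are you that Paris is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```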
|
shehnee/my_awesome_eli5_mlm_model
|
shehnee
| 2024-10-17T03:51:20Z | 179 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-10-17T03:36:14Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilroberta-base
tags:
- generated_from_trainer
datasets:
- eli5_category
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2412 | 1.0 | 1323 | 2.0432 |
| 2.177 | 2.0 | 2646 | 2.0161 |
| 2.1357 | 3.0 | 3969 | 2.0042 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
BEGADE/chat
|
BEGADE
| 2024-10-17T03:45:52Z | 30 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-13T01:32:01Z |
---
library_name: transformers
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: chat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chat
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Gummybear05/wav2vec2-E30_freq_pause
|
Gummybear05
| 2024-10-17T03:27:29Z | 15 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-10-17T02:15:18Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-E30_freq_pause
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-E30_freq_pause
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0467
- Cer: 28.3130
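A minimal transcription sketch with the `automatic-speech-recognition` pipeline (the audio path is illustrative; 16 kHz mono audio is assumed):
```python
from transformers import pipeline

# Transcribe an audio file with this fine-tuned wav2vec2 checkpoint.
asr = pipeline("automatic-speech-recognition", model="Gummybear05/wav2vec2-E30_freq_pause")
print(asr("sample.wav"))  # path to a 16 kHz audio file (illustrative)
```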
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 28.57 | 0.1289 | 200 | 4.9399 | 100.0 |
| 4.9152 | 0.2579 | 400 | 4.7298 | 100.0 |
| 4.7776 | 0.3868 | 600 | 4.6311 | 98.1732 |
| 4.7311 | 0.5158 | 800 | 4.5605 | 97.6739 |
| 4.6426 | 0.6447 | 1000 | 4.5556 | 97.7032 |
| 4.5691 | 0.7737 | 1200 | 4.5028 | 97.4330 |
| 4.1847 | 0.9026 | 1400 | 3.9048 | 81.7375 |
| 3.1837 | 1.0316 | 1600 | 2.8792 | 57.0724 |
| 2.6116 | 1.1605 | 1800 | 2.4695 | 49.7827 |
| 2.2803 | 1.2895 | 2000 | 2.2168 | 43.7559 |
| 2.0438 | 1.4184 | 2200 | 1.9216 | 40.6074 |
| 1.8919 | 1.5474 | 2400 | 1.7582 | 39.0273 |
| 1.7295 | 1.6763 | 2600 | 1.6734 | 38.5103 |
| 1.5832 | 1.8053 | 2800 | 1.5192 | 34.3221 |
| 1.4426 | 1.9342 | 3000 | 1.4440 | 33.6642 |
| 1.3355 | 2.0632 | 3200 | 1.3543 | 33.4821 |
| 1.2131 | 2.1921 | 3400 | 1.2427 | 31.7669 |
| 1.1532 | 2.3211 | 3600 | 1.2136 | 31.8785 |
| 1.0948 | 2.4500 | 3800 | 1.1645 | 30.5804 |
| 1.0283 | 2.5790 | 4000 | 1.1471 | 29.8931 |
| 1.0085 | 2.7079 | 4200 | 1.0822 | 28.8181 |
| 0.9753 | 2.8369 | 4400 | 1.0493 | 28.3306 |
| 0.976 | 2.9658 | 4600 | 1.0467 | 28.3130 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
luigi86/magnum-v3-34b_mlx-8bpw
|
luigi86
| 2024-10-17T03:18:53Z | 5 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2024-10-17T02:46:49Z |
---
license: apache-2.0
language:
- en
base_model: 01-ai/Yi-1.5-34B-32K
tags:
- chat
pipeline_tag: text-generation
library_name: transformers
model-index:
- name: magnum-v3-34b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 51.15
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v3-34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 44.33
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v3-34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 17.82
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v3-34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 14.77
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v3-34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.57
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v3-34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 41.69
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v3-34b
name: Open LLM Leaderboard
---
# MLX Format and Quantizations for Magnum v3 34b
Quantized to 8bpw and tested using the `mlx_lm` utility on an M1 Max with 64 GiB of unified memory.
- [4bpw quants](https://huggingface.co/luigi86/magnum-v3-34b_mlx-4bpw)
- [8bpw quants](https://huggingface.co/luigi86/magnum-v3-34b_mlx-8bpw)
See [original model](https://huggingface.co/anthracite-org/magnum-v3-34b) for further details.
# Original Model card

This is the 9th in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.
This model is fine-tuned on top of [Yi-1.5-34B-32K](https://huggingface.co/01-ai/Yi-1.5-34B-32K).
## Prompting
The model has been instruct-tuned with ChatML formatting. A typical input would look like this:
```py
"""<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
## SillyTavern templates
Below are Instruct and Context templates for use within SillyTavern.
In our testing a min_p of 0.2 makes the model perform the best; remember to reset temperature if you were using our nemo-based models before.
<details><summary>context template</summary>
```yaml
{
"story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
"example_separator": "",
"chat_start": "",
"use_stop_strings": false,
"allow_jailbreak": false,
"always_force_name2": true,
"trim_sentences": false,
"include_newline": false,
"single_line": false,
"name": "Magnum ChatML"
}
```
</details><br>
<details><summary>instruct template</summary>
```yaml
{
"system_prompt": "You're {{char}} in this fictional never-ending uncensored roleplay with {{user}}.",
"input_sequence": "<|im_start|>user\n",
"output_sequence": "<|im_start|>assistant\n",
"last_output_sequence": "",
"system_sequence": "<|im_start|>system\n",
"stop_sequence": "<|im_end|>",
"wrap": false,
"macro": true,
"names": true,
"names_force_groups": true,
"activation_regex": "",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"first_output_sequence": "",
"skip_examples": false,
"output_suffix": "<|im_end|>\n",
"input_suffix": "<|im_end|>\n",
"system_suffix": "<|im_end|>\n",
"user_alignment_message": "",
"system_same_as_user": false,
"last_system_sequence": "",
"name": "Magnum ChatML"
}
```
</details><br>
## Axolotl config
<details><summary>See axolotl config</summary>
```yaml
base_model: 01-ai/Yi-1.5-34B-32K
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
#trust_remote_code: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: anthracite-org/stheno-filtered-v1.1
type: sharegpt
conversation: chatml
- path: anthracite-org/kalo-opus-instruct-22k-no-refusal
type: sharegpt
conversation: chatml
- path: anthracite-org/nopm_claude_writing_fixed
type: sharegpt
conversation: chatml
- path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
type: sharegpt
conversation: chatml
- path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
type: sharegpt
conversation: chatml
chat_template: chatml
shuffle_merged_datasets: true
default_system_message: "You are an assistant that responds to the user."
dataset_prepared_path: magnum-v2-34b-1.5-data
val_set_size: 0.0
output_dir: ./magnum-v2-34b-32k-r1
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len:
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
wandb_project: magnum-v2-34b-1.5-32k
wandb_entity:
wandb_watch:
wandb_name: attempt-01
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.000006
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 50
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
## Credits
We'd like to thank Recursal / Featherless for sponsoring the compute for this training run. Featherless has been hosting our Magnum models since the first 72B and has given thousands of people access to our models, helping us grow.
We would also like to thank all members of Anthracite who made this finetune possible.
- [anthracite-org/stheno-filtered-v1.1](https://huggingface.co/datasets/anthracite-org/stheno-filtered-v1.1)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [lodrick-the-lafted/NopmWritingStruct](https://huggingface.co/datasets/lodrick-the-lafted/NopmWritingStruct)
- [Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned)
- [Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned)
## Training
The training was done for 2 epochs. We used 8x[H100s](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/) for the full-parameter fine-tuning of the model.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Safety
...
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_anthracite-org__magnum-v3-34b)
| Metric |Value|
|-------------------|----:|
|Avg. |29.39|
|IFEval (0-Shot) |51.15|
|BBH (3-Shot) |44.33|
|MATH Lvl 5 (4-Shot)|17.82|
|GPQA (0-shot) |14.77|
|MuSR (0-shot) | 6.57|
|MMLU-PRO (5-shot) |41.69|
|
bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF
|
bartowski
| 2024-10-17T02:53:49Z | 281 | 2 | null |
[
"gguf",
"mergekit",
"merge",
"text-generation",
"base_model:ProdeusUnity/Celestial-Harmony-14b-v1.0-Experimental-1016",
"base_model:quantized:ProdeusUnity/Celestial-Harmony-14b-v1.0-Experimental-1016",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-10-17T02:13:09Z |
---
base_model: ProdeusUnity/Celestial-Harmony-14b-v1.0-Experimental-1016
pipeline_tag: text-generation
tags:
- mergekit
- merge
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of Celestial-Harmony-14b-v1.0-Experimental-1016
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3930">b3930</a> for quantization.
Original model: https://huggingface.co/ProdeusUnity/Celestial-Harmony-14b-v1.0-Experimental-1016
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-f16.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-f16.gguf) | f16 | 29.55GB | false | Full F16 weights. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q8_0.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q8_0.gguf) | Q8_0 | 15.70GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q6_K_L.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q6_K_L.gguf) | Q6_K_L | 12.50GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q6_K.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q6_K.gguf) | Q6_K | 12.12GB | false | Very high quality, near perfect, *recommended*. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q5_K_L.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q5_K_L.gguf) | Q5_K_L | 10.99GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q5_K_M.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q5_K_M.gguf) | Q5_K_M | 10.51GB | false | High quality, *recommended*. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q5_K_S.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q5_K_S.gguf) | Q5_K_S | 10.27GB | false | High quality, *recommended*. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q4_K_L.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q4_K_L.gguf) | Q4_K_L | 9.57GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q4_K_M.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q4_K_M.gguf) | Q4_K_M | 8.99GB | false | Good quality, default size for most use cases, *recommended*. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q3_K_XL.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q3_K_XL.gguf) | Q3_K_XL | 8.61GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q4_K_S.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q4_K_S.gguf) | Q4_K_S | 8.57GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q4_0.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q4_0.gguf) | Q4_0 | 8.54GB | false | Legacy format, generally not worth using over similarly sized formats |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q4_0_8_8.gguf) | Q4_0_8_8 | 8.52GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). *Don't use on Mac or Windows*. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q4_0_4_8.gguf) | Q4_0_4_8 | 8.52GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). *Don't use on Mac or Windows*. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q4_0_4_4.gguf) | Q4_0_4_4 | 8.52GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. *Don't use on Mac or Windows*. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-IQ4_XS.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-IQ4_XS.gguf) | IQ4_XS | 8.12GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q3_K_L.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q3_K_L.gguf) | Q3_K_L | 7.92GB | false | Lower quality but usable, good for low RAM availability. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q3_K_M.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q3_K_M.gguf) | Q3_K_M | 7.34GB | false | Low quality. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-IQ3_M.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-IQ3_M.gguf) | IQ3_M | 6.92GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q3_K_S.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q3_K_S.gguf) | Q3_K_S | 6.66GB | false | Low quality, not recommended. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q2_K_L.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q2_K_L.gguf) | Q2_K_L | 6.53GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-IQ3_XS.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-IQ3_XS.gguf) | IQ3_XS | 6.38GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-Q2_K.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-Q2_K.gguf) | Q2_K | 5.77GB | false | Very low quality but surprisingly usable. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-IQ2_M.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-IQ2_M.gguf) | IQ2_M | 5.36GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Celestial-Harmony-14b-v1.0-Experimental-1016-IQ2_S.gguf](https://huggingface.co/bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/blob/main/Celestial-Harmony-14b-v1.0-Experimental-1016-IQ2_S.gguf) | IQ2_S | 5.00GB | false | Low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF --include "Celestial-Harmony-14b-v1.0-Experimental-1016-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF --include "Celestial-Harmony-14b-v1.0-Experimental-1016-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Celestial-Harmony-14b-v1.0-Experimental-1016-Q8_0) or download them all in place (./)
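Once downloaded, the file can be run directly with llama.cpp; a minimal sketch (the prompt is illustrative):
```
./llama-cli -m ./Celestial-Harmony-14b-v1.0-Experimental-1016-Q4_K_M.gguf -p "Write a short poem about autumn." -n 128
```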
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
onlysainaa/cyrillic_to_script-t5-model
|
onlysainaa
| 2024-10-17T02:26:35Z | 122 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"mn",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-10-15T02:08:52Z |
---
library_name: transformers
license: apache-2.0
language:
- mn
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [Sainbayar B. (Б. Сайнбаяр) https://www.instagram.com/only_sainaa/]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [Mongolian Cyrillic to Traditional Mongolian Script conversion (Монгол кириллээс монгол бичиг рүү хөрвүүлэгч загвар)]
- **Language(s) (NLP):** [Mongolian /Монгол/]
- **License:** [More Information Needed]
- **Finetuned from model [google-t5-small]:** [More Information Needed]
```python
# Load the model and tokenizer directly
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("onlysainaa/cyrillic_to_script-t5-model")
model = AutoModelForSeq2SeqLM.from_pretrained("onlysainaa/cyrillic_to_script-t5-model")

# Check if CUDA (GPU) is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Move the model to the same device (GPU or CPU)
model.to(device)

# Prepare text input
input_text = "сайн уу"  # Mongolian greeting

# Tokenize the input text
inputs = tokenizer(input_text, return_tensors="pt")

# Move the input tensors to the same device as the model
inputs = {k: v.to(device) for k, v in inputs.items() if k in ['input_ids', 'attention_mask']}

# Generate the transliteration
outputs = model.generate(**inputs)

# Decode the output to human-readable text
translated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Print the translated text
print(f"Translated Text: {translated_text}")
```
|
ConnorJiang/act_test
|
ConnorJiang
| 2024-10-17T02:16:54Z | 7 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"robotics",
"region:us"
] |
robotics
| 2024-10-17T02:11:41Z |
---
library_name: lerobot
tags:
- act
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/huggingface/lerobot
- Docs: [More Information Needed]
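A minimal reloading sketch via the mixin's `from_pretrained`, assuming the ACT policy class from lerobot (the exact import path may differ between lerobot versions):
```python
# Import path is an assumption and may vary across lerobot releases.
from lerobot.common.policies.act.modeling_act import ACTPolicy

# Reload the pushed weights through the PyTorchModelHubMixin interface.
policy = ACTPolicy.from_pretrained("ConnorJiang/act_test")
policy.eval()
```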
|
devkyle/base-lora-v1
|
devkyle
| 2024-10-17T02:12:01Z | 6 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-base",
"base_model:adapter:openai/whisper-base",
"license:apache-2.0",
"region:us"
] | null | 2024-10-11T05:26:10Z |
---
base_model: openai/whisper-base
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-base-akan-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-akan-v1
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.0085
- eval_runtime: 32.8168
- eval_samples_per_second: 6.094
- eval_steps_per_second: 0.762
- epoch: 20.0
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 2000
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.13.3.dev0
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Wonder-Griffin/The_Judge
|
Wonder-Griffin
| 2024-10-17T02:06:31Z | 163 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"feature-extraction",
"text-generation-inference",
"base_model:Wonder-Griffin/JudgeLLM2",
"base_model:finetune:Wonder-Griffin/JudgeLLM2",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-08-26T13:51:13Z |
---
library_name: transformers
base_model: Wonder-Griffin/JudgeLLM2
tags:
- text-generation-inference
model-index:
- name: The_Judge
results: []
pipeline_tag: feature-extraction
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# The_Judge
This model is a fine-tuned version of [Wonder-Griffin/JudgeLLM2](https://huggingface.co/Wonder-Griffin/JudgeLLM2) on an unknown dataset.
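A minimal usage sketch with the `feature-extraction` pipeline, matching the card's pipeline tag (the input sentence is illustrative):
```python
from transformers import pipeline

# Extract hidden-state features for a piece of text with this checkpoint.
extractor = pipeline("feature-extraction", model="Wonder-Griffin/The_Judge")
features = extractor("The defendant entered a plea of not guilty.")
print(len(features[0]), len(features[0][0]))  # number of tokens, hidden size
```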
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu124
- Datasets 2.20.0
- Tokenizers 0.19.1
|
dimanoid12331/distilbert-NER_finetuned_on_mountines
|
dimanoid12331
| 2024-10-17T02:04:30Z | 8 | 0 | null |
[
"safetensors",
"distilbert",
"token-classification",
"en",
"base_model:dslim/distilbert-NER",
"base_model:finetune:dslim/distilbert-NER",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2024-10-16T22:50:45Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
- f1
- recall
- precision
base_model:
- dslim/distilbert-NER
pipeline_tag: token-classification
---
This is a fine-tuned [DistilBERT-NER](https://huggingface.co/dslim/distilbert-NER) model with the classifier replaced to increase the number of classes from 9 to 11. The two additional classes are I-MOU and B-MOU, which stand for mountain.
The new classifier initially inherited all weights and biases from the original one; the new neurons were added with weights initialized with xavier_uniform_.
#### How to use
This model can be utilized with the Transformers *pipeline* for NER, similar to the BERT models.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("dimanoid12331/distilbert-NER_finetuned_on_mountines")
model = AutoModelForTokenClassification.from_pretrained("dimanoid12331/distilbert-NER_finetuned_on_mountines")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Wolfgang and I live in Berlin"
ner_results = nlp(example)
print(ner_results)
```
## Training data
This model was fine-tuned on a custom, artificially generated English dataset of sentences that contain mountains.
As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-MISC |Beginning of a miscellaneous entity right after another miscellaneous entity
I-MISC | Miscellaneous entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organization right after another organization
I-ORG |organization
B-LOC |Beginning of a location right after another location
I-LOC |Location
B-MOU |Beginning of a Mountain right after another Mountain
I-MOU |Mountain
Sentences |Tokens
-|-
216 |2783
## Eval results
| Metric | Score |
|------------|-------|
| Loss | 0.2035|
| Precision | 0.8536|
| Recall | 0.7906|
| F1 | 0.7117|
| Accuracy | 0.7906|
|
Gummybear05/wav2vec2-E30_freq_pause1
|
Gummybear05
| 2024-10-17T01:54:43Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-10-16T17:31:54Z |
---
base_model: facebook/wav2vec2-xls-r-300m
library_name: transformers
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-E30_freq_pause
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-E30_freq_pause
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2019
- Cer: 75.5110
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 35.7962 | 0.1289 | 200 | 6.9049 | 100.0 |
| 14.8713 | 0.2579 | 400 | 4.9998 | 90.5193 |
| 10.3032 | 0.3868 | 600 | 4.5702 | 93.8440 |
| 8.67 | 0.5158 | 800 | 4.6552 | 92.9570 |
| 5.3475 | 0.6447 | 1000 | 4.5889 | 94.0437 |
| 4.9653 | 0.7737 | 1200 | 4.6626 | 93.9203 |
| 5.4367 | 0.9026 | 1400 | 4.4921 | 93.8322 |
| 5.1023 | 1.0316 | 1600 | 4.5898 | 93.5620 |
| 4.6675 | 1.1605 | 1800 | 4.3543 | 93.2801 |
| 4.9955 | 1.2895 | 2000 | 4.3195 | 92.9159 |
| 4.7843 | 1.4184 | 2200 | 4.2888 | 92.4871 |
| 4.7112 | 1.5474 | 2400 | 4.2545 | 92.2580 |
| 4.958 | 1.6763 | 2600 | 4.3499 | 85.8141 |
| 4.5195 | 1.8053 | 2800 | 4.2328 | 83.8522 |
| 4.7397 | 1.9342 | 3000 | 4.2644 | 82.5540 |
| 4.2707 | 2.0632 | 3200 | 4.3350 | 83.0298 |
| 4.6255 | 2.1921 | 3400 | 4.1961 | 81.6671 |
| 4.2181 | 2.3211 | 3600 | 4.1846 | 79.9107 |
| 4.6953 | 2.4500 | 3800 | 4.3664 | 74.8590 |
| 4.2375 | 2.5790 | 4000 | 4.5549 | 74.7650 |
| 4.186 | 2.7079 | 4200 | 4.2655 | 75.3818 |
| 4.2477 | 2.8369 | 4400 | 4.1909 | 76.5625 |
| 4.2201 | 2.9658 | 4600 | 4.2019 | 75.5110 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Sonlen/mi_modelo
|
Sonlen
| 2024-10-17T01:50:41Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-17T01:14:47Z |
---
base_model: google-bert/bert-base-multilingual-cased
library_name: transformers
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mi_modelo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mi_modelo
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9929 | 1.0 | 4321 | 0.5993 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Tokenizers 0.19.1
|
lfch1030/El_modelo_talento_tech_prueba
|
lfch1030
| 2024-10-17T01:49:22Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-17T01:24:24Z |
---
base_model: google-bert/bert-base-multilingual-cased
library_name: transformers
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: El_modelo_talento_tech_prueba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# El_modelo_talento_tech_prueba
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4928
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8602 | 1.0 | 4321 | 0.4928 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Tokenizers 0.19.1
|
duyntnet/Nous-Hermes-13b-imatrix-GGUF
|
duyntnet
| 2024-10-17T01:44:29Z | 93 | 0 |
transformers
|
[
"transformers",
"gguf",
"imatrix",
"Nous-Hermes-13b",
"text-generation",
"en",
"license:other",
"region:us"
] |
text-generation
| 2024-10-16T20:30:57Z |
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Nous-Hermes-13b
---
Quantizations of https://huggingface.co/NousResearch/Nous-Hermes-13b
### Inference Clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [ollama](https://github.com/ollama/ollama)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [GPT4All](https://github.com/nomic-ai/gpt4all)
* [jan](https://github.com/janhq/jan)
---
# From original readme
## Model Description
Nous-Hermes-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Karan4D leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors. The result is an enhanced Llama 13b model that rivals GPT-3.5-turbo in performance across a variety of tasks.
This model stands out for its long responses, low hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 2000 sequence length on an 8x a100 80GB DGX machine for over 50 hours.
## Model Training
The model was trained almost entirely on synthetic GPT-4 outputs. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), CodeAlpaca, Evol_Instruct Uncensored, GPT4-LLM, and Unnatural Instructions.
Additional data inputs came from Camel-AI's Biology/Physics/Chemistry and Math Datasets, Airoboros' GPT-4 Dataset, and more from CodeAlpaca. The total volume of data encompassed over 300,000 instructions.
## Prompt Format
The model follows the Alpaca prompt format:
```
### Instruction:
### Response:
```
or
```
### Instruction:
### Input:
### Response:
```
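For reference, a small helper like the one below (a sketch, not part of the original model card) can be used to assemble prompts in this format before passing them to any of the inference clients listed above. The exact blank-line spacing is an assumption; only the section headers come from the format shown.
```python
# Sketch of a helper that assembles an Alpaca-style prompt for this model.
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    if input_text:
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_alpaca_prompt("List three facts about the Alps.")
```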
|
KThellez/mi_modelo
|
KThellez
| 2024-10-17T01:42:20Z | 112 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-17T01:18:12Z |
---
base_model: google-bert/bert-base-multilingual-cased
library_name: transformers
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mi_modelo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mi_modelo
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.947 | 1.0 | 4321 | 0.5149 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Tokenizers 0.19.1
|
ankner/Llama3-8B-Classic-RM
|
ankner
| 2024-10-17T01:41:59Z | 6 | 0 | null |
[
"safetensors",
"cloud",
"arxiv:2408.11791",
"region:us"
] | null | 2024-09-04T17:28:16Z |
---
{}
---
<h1 align="center">
<u>C</u>ritique-out-<u>Loud</u> Reward Models (CLoud)
</h1>
<p align="center">
<img src="CLoud.gif" alt="CLoud"/>
</p>
<p align="center">
| <a href="https://arxiv.org/abs/2408.11791"><b>Paper</b></a> | <a href="https://x.com/ZackAnkner/status/1826607200376336478"> <b>Tweet</b> </a> |
</p>
---
## Introduction
<u>C</u>ritique-out-<u>Loud</u> reward models are reward models that can reason explicitly about the quality of an input by producing Chain-of-Thought-like critiques of that input before predicting a reward.
In classic reward model training, the reward model is trained as a reward head initialized on top of the base LLM.
Without LM capabilities, classic reward models act as encoders and must predict rewards within a single forward pass through the model, meaning reasoning must happen implicitly.
In contrast, CLoud reward models are trained to both produce explicit reasoning about quality and to score based on these critique reasoning traces.
CLoud reward models lead to large gains for pairwise preference modeling on RewardBench, and also lead to large gains in win rate when used as the scoring model in Best-of-N sampling on ArenaHard.
## Todo
- [x] Release models and inference examples
- [ ] Post example training run logs
- [ ] Add ArenaHard evaluation code
- [ ] Add VLLM support for inference
## Table of Contents
- [Introduction](#introduction)
- [Todo](#todo)
- [Table of Contents](#table-of-contents)
- [Setup](#setup)
- [Model Weights](#model-weights)
- [Inference](#inference)
- [Dataset](#dataset)
- [Training](#training)
- [CLoud Training](#cloud-training)
- [Classic Training](#classic-training)
- [Evaluation](#evaluation)
- [Citation](#citation)
## Setup
```bash
git clone https://github.com/zankner/CLoud
cd CLoud
pip install -e .
```
Optional: base docker image used during development `mosaicml/pytorch:2.3.0_cu121-python3.11-ubuntu20.04`
## Model Weights
| Base Model | RM Type | Hugging Face Repo |
| ---------- | --------------- |--------------------------------------------------------------------- |
| Llama3-8B | Classic | [ankner/Llama3-8B-Classic-RM](https://huggingface.co/ankner/Llama3-8B-Classic-RM) |
| Llama3-8B | CLoud | [ankner/Llama3-8B-CLoud-RM](https://huggingface.co/ankner/Llama3-8B-CLoud-RM) |
| Llama3-70B | Classic | [ankner/Llama3-70B-Classic-RM](https://huggingface.co/ankner/Llama3-70B-Classic-RM) |
| Llama3-70B | CLoud | [ankner/Llama3-70B-CLoud-RM](https://huggingface.co/ankner/Llama3-70B-CLoud-RM) |
## Inference
We provide a gradio demo which can be run as follows: `gradio cloud/demo.py`. By default this will demo `ankner/Llama3-8B-CLoud-RM`, but you can change the model loaded in the script.
If you want to perform inference on your own data, please refer to the following example:
```python
from cloud.model import CLoudRewardModel
from transformers import AutoTokenizer
model_name = "ankner/Llama3-8B-Cloud-RM" # Replace with RM trained with this repo
model = CLoudRewardModel.from_pretrained(model_name, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left")
user_prompt = [
"Write me a story",
"What is the capital of the moon?"
]
assistant_response = [
"No I don't want to do that.",
"Since the moon is made out of cheese, the capital is mozzerella."
]
rewards, critiques = model.predict_reward(user_prompt, assistant_response, tokenizer)
for reward, critique in zip(rewards, critiques):
print("Critique:")
print(critique)
print("Reward:")
print(reward)
print("=" * 100)
```
## Dataset
We provide code to reconstruct the datasets used in the paper.
There are two datasets to build for training: one with oracle critiques meant to simulate human feedback, and one with self-generated critiques.
To build the oracle critique dataset run:
```bash
python cloud/data/build_official_ultra_llama.py --mode oracle
```
To build the self-generated critique dataset run:
```bash
python cloud/data/build_official_ultra_llama.py --mode self-gen --model-size {model-size}
```
where ```{model-size}``` is the size of the model you are using (e.g. 8b, 70b).
<details>
<summary>Build your own dataset from scratch</summary>
1. <b>Build prompts</b> - You can use any dataset you like as long as it has ```prompt``` and ```id``` columns. If you would like to build prompts from UltraFeedback and UltraInteract as we do in the paper run:
```bash
python cloud/data/build_ultra_prompts.py --save-name {name-to-save-as}
```
2. <b>Build chosen / rejected responses</b>
```bash
python cloud/data/build_judgements.py --gen-model {model-generating-responses} --judge-model {model-judging-responses} --base-dataset {path-to-prompt-dataset} --save-name {name-to-save-as}
```
The above command requires a hosted generating and judging model. To host the models using vllm run:
```bash
python -m vllm.entrypoints.openai.api_server --model {path-to-gen/judge-model} --dtype bfloat16 --tensor-parallel-size {num-gpus} --port {8000 for gen and 8001 for judge}
```
3. <b>Build critiques</b>
```bash
python cloud/data/generate_oracle_critiques.py --judge-model {model-generating-critiques} --base-dataset {path-to-responses-dataset} --save-name {name-to-save-as}
```
Again, this command assumes a hosted critique model. To host the critique model you can use the above vllm command (This time just use port 8000 for the judge model).
</details>
## Training
Before training, you must run the [setup script](#setup) and build the [datasets](#dataset).
The training configs are located in the ```cloud/train/configs/``` folder.
We have already set the optimal hyperparameters that we found for each model as reported in the paper.
The only parameter that needs to be set is the ```variables.micro_batch_size``` parameter, in accordance with your GPU memory.
If you want to log the training runs, uncomment the ```loggers``` section in the config and fill in your wandb settings.
Checkpoints will be saved throughout training to the ```save_folder``` parameter, which is ```ckpts/${variables.run_name}``` by default. The final checkpoint will contain a folder ```hf``` where the huggingface model is saved.
> **Warning**: The below training scripts for both CLoud and Classic prefill the dataset names to be the datasets we release. If you would like to train on your own dataset, you will need to follow the directions to build said dataset in the [dataset section](#dataset) and change the ```variables.dataset_path``` parameter in the training configs.
### CLoud Training
1. The first step is to finetune the base model to produce critiques:
```bash
composer -n {num_gpus} cloud/train/train.py cloud/train/configs/{model_size}_critique_sft.yaml
```
Replace ```{model_size}``` with the size of the model you are training (e.g. 8b, 70b).
2. (Optional: you can skip this step if you use the self-generated data we release.) After the critique SFT model is trained, you need to regenerate the dataset with its critiques.
To do so, first serve the critique SFT model; to host it locally using vllm, run:
```bash
python -m vllm.entrypoints.openai.api_server --model {path-to-critique-sft-model} --dtype bfloat16 --tensor-parallel-size {num-gpus}
```
Then run the data building script:
```bash
python cloud/data/generate_self_critiques.py --model {path-to-critique-sft-model} --base-dataset {path-to-base-dataset} --upload-name {path-to-save-dataset}
```
3. After building the self-generated dataset, we can train the CLoud model:
```bash
composer -n {num_gpus} cloud/train/train.py cloud/train/configs/{model_size}_cloud.yaml
```
### Classic Training
To train a classic reward model, you can use the following command:
```bash
composer -n {num_gpus} cloud/train/train.py cloud/train/configs/{model_size}_classic.yaml
```
## Evaluation
To run evaluation for a given benchmark run the following command:
```bash
python cloud/eval/eval.py --model-path {path-to-model} --benchmark {benchmark-name}
```
Currently, we only support the RewardBench benchmark.
## Citation
If you found our work useful please consider citing it:
```bibtex
@misc{ankner2024critiqueoutloudrewardmodels,
title={Critique-out-Loud Reward Models},
author={Zachary Ankner and Mansheej Paul and Brandon Cui and Jonathan D. Chang and Prithviraj Ammanabrolu},
year={2024},
eprint={2408.11791},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2408.11791},
}
```
|
heisenburgerking/heisenbergllama3.1
|
heisenburgerking
| 2024-10-17T01:36:07Z | 7 | 0 | null |
[
"safetensors",
"llama",
"facebook",
"meta",
"pytorch",
"llama-3",
"krx",
"text-generation",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"region:us"
] |
text-generation
| 2024-10-15T13:37:02Z |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- krx
extra_gated_prompt: >-
### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT
Llama 3.1 Version Release Date: July 23, 2024
"Agreement" means the terms and conditions for use, reproduction, distribution
and modification of the Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation
accompanying Llama 3.1 distributed by Meta at
https://llama.meta.com/doc/overview.
"Licensee" or "you" means you, or your employer or any other person or entity
(if you are entering into this Agreement on such person or entity’s behalf),
of the age required under applicable laws, rules or regulations to provide
legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Llama 3.1" means the foundational large language models and software and
algorithms, including machine-learning model code, trained model weights,
inference-enabling code, training-enabling code, fine-tuning enabling code and
other elements of the foregoing distributed by Meta at
https://llama.meta.com/llama-downloads.
"Llama Materials" means, collectively, Meta’s proprietary Llama 3.1 and
Documentation (and any portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or,
if you are an entity, your principal place of business is in the EEA or
Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA
or Switzerland).
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide,
non-transferable and royalty-free limited license under Meta’s intellectual
property or other rights owned by Meta embodied in the Llama Materials to use,
reproduce, distribute, copy, create derivative works of, and make
modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative
works thereof), or a product or service (including another AI model) that
contains any of them, you shall (A) provide a copy of this Agreement with any
such Llama Materials; and (B) prominently display “Built with Llama” on a
related website, user interface, blogpost, about page, or product
documentation. If you use the Llama Materials or any outputs or results of the
Llama Materials to create, train, fine tune, or otherwise improve an AI model,
which is distributed or made available, you shall also include “Llama” at the
beginning of any such AI model name.
ii. If you receive Llama Materials, or any derivative works thereof, from a
Licensee as part of an integrated end user product, then Section 2 of this
Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute
the following attribution notice within a “Notice” text file distributed as a
part of such copies: “Llama 3.1 is licensed under the Llama 3.1 Community
License, Copyright © Meta Platforms, Inc. All Rights Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and
regulations (including trade compliance laws and regulations) and adhere to
the Acceptable Use Policy for the Llama Materials (available at
https://llama.meta.com/llama3_1/use-policy), which is hereby incorporated by
reference into this Agreement.
2. Additional Commercial Terms. If, on the Llama 3.1 version release date, the
monthly active users of the products or services made available by or for
Licensee, or Licensee’s affiliates, is greater than 700 million monthly active
users in the preceding calendar month, you must request a license from Meta,
which Meta may grant to you in its sole discretion, and you are not authorized
to exercise any of the rights under this Agreement unless or until Meta
otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA
MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”
BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF
ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY
WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A
PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE
APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY
RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE
UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS
LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS
OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE
DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY
OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, neither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, except as
required for reasonable and customary use in describing and redistributing the
Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a
license to use “Llama” (the “Mark”) solely as required to comply with the last
sentence of Section 1.b.i. You will comply with Meta’s brand guidelines
(currently accessible at
https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill
arising out of your use of the Mark will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or
for Meta, with respect to any derivative works and modifications of the Llama
Materials that are made by you, as between you and Meta, you are and will be
the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity
(including a cross-claim or counterclaim in a lawsuit) alleging that the Llama
Materials or Llama 3.1 outputs or results, or any portion of any of the
foregoing, constitutes infringement of intellectual property or other rights
owned or licensable by you, then any licenses granted to you under this
Agreement shall terminate as of the date such litigation or claim is filed or
instituted. You will indemnify and hold harmless Meta from and against any
claim by any third party arising out of or related to your use or distribution
of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your
acceptance of this Agreement or access to the Llama Materials and will
continue in full force and effect until terminated in accordance with the
terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this
Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and
construed under the laws of the State of California without regard to choice
of law principles, and the UN Convention on Contracts for the International
Sale of Goods does not apply to this Agreement. The courts of California shall
have exclusive jurisdiction of any dispute arising out of this Agreement.
### Llama 3.1 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features,
including Llama 3.1. If you access or use Llama 3.1, you agree to this
Acceptable Use Policy (“Policy”). The most recent copy of this policy can be
found at
[https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)
#### Prohibited Uses
We want everyone to use Llama 3.1 safely and responsibly. You agree you will
not use, or allow others to use, Llama 3.1 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
3. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
4. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
5. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
6. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
7. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
8. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or
development of activities that present a risk of death or bodily harm to
individuals, including use of Llama 3.1 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 3.1 related
to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Llama 3.1 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI
system
Please report any violation of this Policy, software “bug,” or other problems
that could lead to a violation of this Policy through one of the following
means:
* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
* Reporting risky content generated by the model:
developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
The information you provide will be collected, stored, processed and shared in
accordance with the [Meta Privacy
Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Information
The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Input modalities</strong>
</td>
<td><strong>Output modalities</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="3" >Llama 3.1 (text only)
</td>
<td rowspan="3" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
<td rowspan="3" >15T+
</td>
<td rowspan="3" >December 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
</tr>
<tr>
<td>405B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
</tr>
</table>
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
**Llama 3.1 family of models**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** July 23, 2024.
**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card**.
**<span style="text-decoration:underline;">Note</span>: Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy and in such cases are responsible for ensuring that any uses of Llama 3.1 in additional languages is done in a safe and responsible manner.
## How to use
This repository contains two versions of Meta-Llama-3.1-8B-Instruct, for use with transformers and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)
### Tool use with transformers
LLaMA-3.1 supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/).
Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers.
Here is a quick example showing a single simple tool:
```python
# First, define a tool
def get_current_temperature(location: str) -> float:
"""
Get the current temperature at a location.
Args:
location: The location to get the temperature for, in the format "City, Country"
Returns:
The current temperature at the specified location in the specified units, as a float.
"""
return 22. # A real function should probably actually get the temperature!
# Next, create a chat and apply the chat template
messages = [
{"role": "system", "content": "You are a bot that responds to weather queries."},
{"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]
inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True)
```
You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so:
```python
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
```
and then call the tool and append the result, with the `tool` role, like so:
```python
messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
```
After that, you can `generate()` again to let the model use the tool result in the chat. Note that this was a very brief introduction to tool calling - for more information,
see the [LLaMA prompt format docs](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/) and the Transformers [tool use documentation](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling).
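As a rough sketch (assuming `model` and `tokenizer` were loaded with `AutoModelForCausalLM`/`AutoTokenizer` for this checkpoint, which the example above does not show), that final generation step could look like:
```python
# Sketch of the follow-up generation step after appending the tool result.
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```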
### Use with `llama`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama)
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --include "original/*" --local-dir Meta-Llama-3.1-8B-Instruct
```
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.
**Training utilized a cumulative of** 39.3M GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
<table>
<tr>
<td>
</td>
<td><strong>Training Time (GPU hours)</strong>
</td>
<td><strong>Training Power Consumption (W)</strong>
</td>
<td><strong>Training Location-Based Greenhouse Gas Emissions</strong>
<p>
<strong>(tons CO2eq)</strong>
</td>
<td><strong>Training Market-Based Greenhouse Gas Emissions</strong>
<p>
<strong>(tons CO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3.1 8B
</td>
<td>1.46M
</td>
<td>700
</td>
<td>420
</td>
<td>0
</td>
</tr>
<tr>
<td>Llama 3.1 70B
</td>
<td>7.0M
</td>
<td>700
</td>
<td>2,040
</td>
<td>0
</td>
</tr>
<tr>
<td>Llama 3.1 405B
</td>
<td>30.84M
</td>
<td>700
</td>
<td>8,930
</td>
<td>0
</td>
</tr>
<tr>
<td>Total
</td>
<td>39.3M
<td>
<ul>
</ul>
</td>
<td>11,390
</td>
<td>0
</td>
</tr>
</table>
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples.
**Data Freshness:** The pretraining data has a cutoff of December 2023.
## Benchmark scores
In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library.
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong># Shots</strong>
</td>
<td><strong>Metric</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 3.1 8B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 3.1 70B</strong>
</td>
<td><strong>Llama 3.1 405B</strong>
</td>
</tr>
<tr>
<td rowspan="7" >General
</td>
<td>MMLU
</td>
<td>5
</td>
<td>macro_avg/acc_char
</td>
<td>66.7
</td>
<td>66.7
</td>
<td>79.5
</td>
<td>79.3
</td>
<td>85.2
</td>
</tr>
<tr>
<td>MMLU-Pro (CoT)
</td>
<td>5
</td>
<td>macro_avg/acc_char
</td>
<td>36.2
</td>
<td>37.1
</td>
<td>55.0
</td>
<td>53.8
</td>
<td>61.6
</td>
</tr>
<tr>
<td>AGIEval English
</td>
<td>3-5
</td>
<td>average/acc_char
</td>
<td>47.1
</td>
<td>47.8
</td>
<td>63.0
</td>
<td>64.6
</td>
<td>71.6
</td>
</tr>
<tr>
<td>CommonSenseQA
</td>
<td>7
</td>
<td>acc_char
</td>
<td>72.6
</td>
<td>75.0
</td>
<td>83.8
</td>
<td>84.1
</td>
<td>85.8
</td>
</tr>
<tr>
<td>Winogrande
</td>
<td>5
</td>
<td>acc_char
</td>
<td>-
</td>
<td>60.5
</td>
<td>-
</td>
<td>83.3
</td>
<td>86.7
</td>
</tr>
<tr>
<td>BIG-Bench Hard (CoT)
</td>
<td>3
</td>
<td>average/em
</td>
<td>61.1
</td>
<td>64.2
</td>
<td>81.3
</td>
<td>81.6
</td>
<td>85.9
</td>
</tr>
<tr>
<td>ARC-Challenge
</td>
<td>25
</td>
<td>acc_char
</td>
<td>79.4
</td>
<td>79.7
</td>
<td>93.1
</td>
<td>92.9
</td>
<td>96.1
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki
</td>
<td>5
</td>
<td>em
</td>
<td>78.5
</td>
<td>77.6
</td>
<td>89.7
</td>
<td>89.8
</td>
<td>91.8
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD
</td>
<td>1
</td>
<td>em
</td>
<td>76.4
</td>
<td>77.0
</td>
<td>85.6
</td>
<td>81.8
</td>
<td>89.3
</td>
</tr>
<tr>
<td>QuAC (F1)
</td>
<td>1
</td>
<td>f1
</td>
<td>44.4
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>51.1
</td>
<td>53.6
</td>
</tr>
<tr>
<td>BoolQ
</td>
<td>0
</td>
<td>acc_char
</td>
<td>75.7
</td>
<td>75.0
</td>
<td>79.0
</td>
<td>79.4
</td>
<td>80.0
</td>
</tr>
<tr>
<td>DROP (F1)
</td>
<td>3
</td>
<td>f1
</td>
<td>58.4
</td>
<td>59.5
</td>
<td>79.7
</td>
<td>79.6
</td>
<td>84.8
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong># Shots</strong>
</td>
<td><strong>Metric</strong>
</td>
<td><strong>Llama 3 8B Instruct</strong>
</td>
<td><strong>Llama 3.1 8B Instruct</strong>
</td>
<td><strong>Llama 3 70B Instruct</strong>
</td>
<td><strong>Llama 3.1 70B Instruct</strong>
</td>
<td><strong>Llama 3.1 405B Instruct</strong>
</td>
</tr>
<tr>
<td rowspan="4" >General
</td>
<td>MMLU
</td>
<td>5
</td>
<td>macro_avg/acc
</td>
<td>68.5
</td>
<td>69.4
</td>
<td>82.0
</td>
<td>83.6
</td>
<td>87.3
</td>
</tr>
<tr>
<td>MMLU (CoT)
</td>
<td>0
</td>
<td>macro_avg/acc
</td>
<td>65.3
</td>
<td>73.0
</td>
<td>80.9
</td>
<td>86.0
</td>
<td>88.6
</td>
</tr>
<tr>
<td>MMLU-Pro (CoT)
</td>
<td>5
</td>
<td>micro_avg/acc_char
</td>
<td>45.5
</td>
<td>48.3
</td>
<td>63.4
</td>
<td>66.4
</td>
<td>73.3
</td>
</tr>
<tr>
<td>IFEval
</td>
<td>
</td>
<td>
</td>
<td>76.8
</td>
<td>80.4
</td>
<td>82.9
</td>
<td>87.5
</td>
<td>88.6
</td>
</tr>
<tr>
<td rowspan="2" >Reasoning
</td>
<td>ARC-C
</td>
<td>0
</td>
<td>acc
</td>
<td>82.4
</td>
<td>83.4
</td>
<td>94.4
</td>
<td>94.8
</td>
<td>96.9
</td>
</tr>
<tr>
<td>GPQA
</td>
<td>0
</td>
<td>em
</td>
<td>34.6
</td>
<td>30.4
</td>
<td>39.5
</td>
<td>46.7
</td>
<td>50.7
</td>
</tr>
<tr>
<td rowspan="4" >Code
</td>
<td>HumanEval
</td>
<td>0
</td>
<td>pass@1
</td>
<td>60.4
</td>
<td>72.6
</td>
<td>81.7
</td>
<td>80.5
</td>
<td>89.0
</td>
</tr>
<tr>
<td>MBPP ++ base version
</td>
<td>0
</td>
<td>pass@1
</td>
<td>70.6
</td>
<td>72.8
</td>
<td>82.5
</td>
<td>86.0
</td>
<td>88.6
</td>
</tr>
<tr>
<td>Multipl-E HumanEval
</td>
<td>0
</td>
<td>pass@1
</td>
<td>-
</td>
<td>50.8
</td>
<td>-
</td>
<td>65.5
</td>
<td>75.2
</td>
</tr>
<tr>
<td>Multipl-E MBPP
</td>
<td>0
</td>
<td>pass@1
</td>
<td>-
</td>
<td>52.4
</td>
<td>-
</td>
<td>62.0
</td>
<td>65.7
</td>
</tr>
<tr>
<td rowspan="2" >Math
</td>
<td>GSM-8K (CoT)
</td>
<td>8
</td>
<td>em_maj1@1
</td>
<td>80.6
</td>
<td>84.5
</td>
<td>93.0
</td>
<td>95.1
</td>
<td>96.8
</td>
</tr>
<tr>
<td>MATH (CoT)
</td>
<td>0
</td>
<td>final_em
</td>
<td>29.1
</td>
<td>51.9
</td>
<td>51.0
</td>
<td>68.0
</td>
<td>73.8
</td>
</tr>
<tr>
<td rowspan="4" >Tool Use
</td>
<td>API-Bank
</td>
<td>0
</td>
<td>acc
</td>
<td>48.3
</td>
<td>82.6
</td>
<td>85.1
</td>
<td>90.0
</td>
<td>92.0
</td>
</tr>
<tr>
<td>BFCL
</td>
<td>0
</td>
<td>acc
</td>
<td>60.3
</td>
<td>76.1
</td>
<td>83.0
</td>
<td>84.8
</td>
<td>88.5
</td>
</tr>
<tr>
<td>Gorilla Benchmark API Bench
</td>
<td>0
</td>
<td>acc
</td>
<td>1.7
</td>
<td>8.2
</td>
<td>14.7
</td>
<td>29.7
</td>
<td>35.3
</td>
</tr>
<tr>
<td>Nexus (0-shot)
</td>
<td>0
</td>
<td>macro_avg/acc
</td>
<td>18.1
</td>
<td>38.5
</td>
<td>47.8
</td>
<td>56.7
</td>
<td>58.7
</td>
</tr>
<tr>
<td>Multilingual
</td>
<td>Multilingual MGSM (CoT)
</td>
<td>0
</td>
<td>em
</td>
<td>-
</td>
<td>68.9
</td>
<td>-
</td>
<td>86.9
</td>
<td>91.6
</td>
</tr>
</table>
#### Multilingual benchmarks
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Language</strong>
</td>
<td><strong>Llama 3.1 8B</strong>
</td>
<td><strong>Llama 3.1 70B</strong>
</td>
<td><strong>Llama 3.1 405B</strong>
</td>
</tr>
<tr>
<td rowspan="9" ><strong>General</strong>
</td>
<td rowspan="9" ><strong>MMLU (5-shot, macro_avg/acc)</strong>
</td>
<td>Portuguese
</td>
<td>62.12
</td>
<td>80.13
</td>
<td>84.95
</td>
</tr>
<tr>
<td>Spanish
</td>
<td>62.45
</td>
<td>80.05
</td>
<td>85.08
</td>
</tr>
<tr>
<td>Italian
</td>
<td>61.63
</td>
<td>80.4
</td>
<td>85.04
</td>
</tr>
<tr>
<td>German
</td>
<td>60.59
</td>
<td>79.27
</td>
<td>84.36
</td>
</tr>
<tr>
<td>French
</td>
<td>62.34
</td>
<td>79.82
</td>
<td>84.66
</td>
</tr>
<tr>
<td>Hindi
</td>
<td>50.88
</td>
<td>74.52
</td>
<td>80.31
</td>
</tr>
<tr>
<td>Thai
</td>
<td>50.32
</td>
<td>72.95
</td>
<td>78.21
</td>
</tr>
</table>
## Responsibility & Safety
As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks:
* Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
* Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
* Provide protections for the community to help prevent the misuse of our models.
### Responsible deployment
Llama is a foundational technology designed to be used in a variety of use cases, examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models enabling the world to benefit from the technology power, by aligning our model safety for the generic use cases addressing a standard set of harms. Developers are then in the driver seat to tailor safety for their use case, defining their own policy and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in our Responsible Use Guide, you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more.
#### Llama 3.1 instruct
Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. For more details on the safety mitigations implemented please read the Llama 3 paper.
**Fine-tuning data**
We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone**
Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.1 systems
**Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools.
As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
#### New capabilities
Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs and possible integrations by developers with third party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all Generative AI use cases.
**Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third party services they use to be aware of the safety and security limitations when using this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of the third party safeguards.
**Multilinguality**: Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in other languages than those that meet performance thresholds for safety and helpfulness. We strongly discourage developers from using this model to converse in non-supported languages without implementing finetuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide.
### Evaluations
We evaluated Llama models for common use cases as well as specific capabilities. Common use cases evaluations measure safety risks of systems for most commonly built applications including chat bot, coding assistant, tool calls. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application.
Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which were crafted dedicated benchmarks including long context, multilingual, tools calls, coding or memorization.
**Red teaming**
For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets.
We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical and other risks
We specifically focused our efforts on mitigating the following critical risk areas:
**1- CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**
To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons.
**2. Child Safety**
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
**3. Cyber attack enablement**
Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.
Our study of Llama-3.1-405B’s social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 Cyber security whitepaper to learn more.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
|
Mubin1917/Fhi-3.5-mini-instruct-2
|
Mubin1917
| 2024-10-17T01:33:08Z | 128 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:finetune:unsloth/Phi-3.5-mini-instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-04T04:11:34Z |
---
base_model: unsloth/Phi-3.5-mini-instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# This page is work in progress!
## Overview
**Fhi-3.5-mini-instruct** is a fine-tuned version of the [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) model, optimized for function calling.
### Usage
Here’s a basic example of how to use function calling with the Fhi-3.5-mini-instruct model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer
model_id = "Mubin1917/Fhi-3.5-mini-instruct-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="cuda")

def get_current_temperature(location: str) -> float:
    """
    Get the current temperature at a location.

    Args:
        location: The location to get the temperature for, in the format "City, Country"

    Returns:
        The current temperature at the specified location, as a float.
    """
    return 22.0

# Create the messages list
messages = [
    {"role": "system", "content": "You are a helpful weather assistant."},
    {"role": "user", "content": "What's the current weather in London and New York? Please use Celsius."}
]

# Apply the chat template, passing the custom tool
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],
    add_generation_prompt=True,
    tokenize=False
)

inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=False,
    use_cache=True,
    eos_token_id=[32007],
)
# Decode only the newly generated tokens
result = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(result)
```
The result will look like this:
```python
[
{'name': 'get_current_temperature', 'arguments': {'location': 'London, UK'}},
{'name': 'get_current_temperature', 'arguments': {'location': 'New York, USA'}}
]
```
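The model returns these tool calls as text rather than executing them. Below is a minimal sketch of the follow-up step, assuming the output can be parsed with `ast.literal_eval` (it is single-quoted, so not valid JSON) and that tool responses are fed back as `tool`-role messages; the exact schema expected by the chat template may differ.
```python
import ast

# Parse the generated tool calls (single-quoted, so use literal_eval rather than json.loads)
tool_calls = ast.literal_eval(result)

# Execute each requested call and collect the responses as tool messages
tool_responses = [
    {
        "role": "tool",
        "name": call["name"],
        "content": str(get_current_temperature(**call["arguments"])),
    }
    for call in tool_calls
]

# Appending these to `messages` and generating again should yield the final
# natural-language answer; adjust the message schema to whatever the chat template expects.
messages.extend(tool_responses)
```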
## Testing and Benchmarking
This model is still undergoing testing and evaluation. Use it at your own risk until further validation is complete. Performance on benchmarks like MMLU and MMLU-Pro will be updated soon.
| Benchmark | Fhi-3.5 Mini-Ins | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------------------|------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Multilingual MMLU | ____ | 55.4 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 |
| MMLU (5-shot) | __ | 69 | 60.3 | 67.2 | 68.1 | 71.3 | 78.7 | 77.2 |
| MMLU-Pro (3-shot, CoT) |__| 47.4 | 18 | 40.7 | 44 | 50.1 | 57.2 | 62.8 |
<!-- | Multilingual MMLU-Pro | 30.9 | 30.21 | 15.0 | 34.0 | 21.4 | 43.0 | 57.9 | 53.2 |
| **Average** | **55.2** | **52.3** | **47.9** | **55.3** | **47.5** | **59.6** | **64.3** | **76.6** |
-->
<!-- The table below shows Multilingual MMLU scores in some of the supported languages.
| Benchmark | Phi-3.5 Mini-Ins | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 44.2 | 35.4 | 33.7 | 45.3 | 49.1 | 56.3 | 73.6 | 67.1 |
| Chinese | 52.6 | 46.9 | 45.9 | 58.2 | 54.4 | 62.7 | 66.7 | 70.8 |
| Dutch | 57.7 | 48.0 | 51.3 | 60.1 | 55.9 | 66.7 | 80.6 | 74.2 |
| French | 61.1 | 61.7 | 53.0 | 63.8 | 62.8 | 67.0 | 82.9 | 75.6 |
| German | 62.4 | 61.3 | 50.1 | 64.5 | 59.9 | 65.7 | 79.5 | 74.3 |
| Italian | 62.8 | 63.1 | 52.5 | 64.1 | 55.9 | 65.7 | 82.6 | 75.9 |
| Russian | 50.4 | 45.3 | 48.9 | 59.0 | 57.4 | 63.2 | 78.7 | 72.6 |
| Spanish | 62.6 | 61.3 | 53.9 | 64.3 | 62.6 | 66.0 | 80.0 | 75.5 |
| Ukrainian | 45.2 | 36.7 | 46.9 | 56.6 | 52.9 | 62.0 | 77.4 | 72.6 |
-->
## Credits
Will be updated soon
|
win10/Mistral-Nemo-Base-2407-20b
|
win10
| 2024-10-17T01:28:44Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:unsloth/Mistral-Nemo-Base-2407",
"base_model:finetune:unsloth/Mistral-Nemo-Base-2407",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T01:21:47Z |
---
base_model:
- unsloth/Mistral-Nemo-Base-2407
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 2]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [1, 3]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [2, 4]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [3, 5]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
# The following are the newly added layers
- sources:
- layer_range: [4, 6]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [5, 7]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [6, 8]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [7, 9]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [8, 10]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [9, 11]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [10, 12]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [11, 13]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [12, 14]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [13, 15]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [14, 16]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [15, 17]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [16, 18]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [17, 19]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [18, 20]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [19, 21]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [20, 22]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [21, 23]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [22, 24]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [23, 25]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [24, 26]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [25, 27]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [26, 28]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [27, 29]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [28, 30]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [29, 31]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [30, 32]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [31, 33]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [32, 34]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [33, 35]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [34, 36]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [35, 37]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [36, 38]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [37, 39]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [38, 40]
model: unsloth/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
```
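As a rough sanity check on the resulting model size, the configuration above stacks 39 overlapping two-layer slices ([0, 2] through [38, 40]) of the 40-layer base model. A small sketch of that arithmetic is shown below; it assumes only the transformer layers are duplicated.
```python
# Back-of-the-envelope depth count for the passthrough merge defined above.
slices = [(start, start + 2) for start in range(39)]  # [0, 2], [1, 3], ..., [38, 40]

merged_layers = sum(end - start for start, end in slices)
print(f"{len(slices)} slices -> {merged_layers} layers")  # 39 slices -> 78 layers

# Mistral-Nemo-Base-2407 has 40 layers (~12B parameters), so the merged stack is
# roughly twice as deep, which is where the "20b" in the repository name comes from.
```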
|
RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf
|
RichardErkhov
| 2024-10-17T01:28:37Z | 14 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-16T20:58:07Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
FinChat298B_Llama-3.2-4Bi - GGUF
- Model creator: https://huggingface.co/gkMSDA/
- Original model: https://huggingface.co/gkMSDA/FinChat298B_Llama-3.2-4Bi/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [FinChat298B_Llama-3.2-4Bi.Q2_K.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q2_K.gguf) | Q2_K | 2.96GB |
| [FinChat298B_Llama-3.2-4Bi.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [FinChat298B_Llama-3.2-4Bi.IQ3_S.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [FinChat298B_Llama-3.2-4Bi.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [FinChat298B_Llama-3.2-4Bi.IQ3_M.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [FinChat298B_Llama-3.2-4Bi.Q3_K.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q3_K.gguf) | Q3_K | 3.74GB |
| [FinChat298B_Llama-3.2-4Bi.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [FinChat298B_Llama-3.2-4Bi.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [FinChat298B_Llama-3.2-4Bi.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [FinChat298B_Llama-3.2-4Bi.Q4_0.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q4_0.gguf) | Q4_0 | 4.34GB |
| [FinChat298B_Llama-3.2-4Bi.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [FinChat298B_Llama-3.2-4Bi.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [FinChat298B_Llama-3.2-4Bi.Q4_K.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q4_K.gguf) | Q4_K | 4.58GB |
| [FinChat298B_Llama-3.2-4Bi.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [FinChat298B_Llama-3.2-4Bi.Q4_1.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q4_1.gguf) | Q4_1 | 4.78GB |
| [FinChat298B_Llama-3.2-4Bi.Q5_0.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q5_0.gguf) | Q5_0 | 5.21GB |
| [FinChat298B_Llama-3.2-4Bi.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [FinChat298B_Llama-3.2-4Bi.Q5_K.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q5_K.gguf) | Q5_K | 5.34GB |
| [FinChat298B_Llama-3.2-4Bi.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [FinChat298B_Llama-3.2-4Bi.Q5_1.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q5_1.gguf) | Q5_1 | 5.65GB |
| [FinChat298B_Llama-3.2-4Bi.Q6_K.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q6_K.gguf) | Q6_K | 6.14GB |
| [FinChat298B_Llama-3.2-4Bi.Q8_0.gguf](https://huggingface.co/RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf/blob/main/FinChat298B_Llama-3.2-4Bi.Q8_0.gguf) | Q8_0 | 7.95GB |
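Individual quant files can be fetched directly; below is a minimal sketch using `huggingface_hub`, with the Q4_K_M file from the table above as an example.
```python
from huggingface_hub import hf_hub_download

# Download one quant from the table above (Q4_K_M shown as an example)
path = hf_hub_download(
    repo_id="RichardErkhov/gkMSDA_-_FinChat298B_Llama-3.2-4Bi-gguf",
    filename="FinChat298B_Llama-3.2-4Bi.Q4_K_M.gguf",
)
print(path)  # local path to the GGUF file, usable with llama.cpp-compatible runtimes
```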
Original model description:
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** gkMSDA
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Marius-L/bbs-02
|
Marius-L
| 2024-10-17T01:27:16Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-10-06T00:55:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
moonjongsul/llama3_2_ko_8b_rs1
|
moonjongsul
| 2024-10-17T01:20:57Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-17T01:19:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MatteoVan/donut-demo
|
MatteoVan
| 2024-10-17T01:20:55Z | 54 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-10-16T23:59:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B-Q5_K_M-GGUF
|
ZeroXClem
| 2024-10-17T01:19:41Z | 11 | 2 | null |
[
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Hermes3",
"SuperNovaLite",
"Purosani",
"Llama3.1",
"kotyKD/Llama3.1-Hermes3-SuperNovaLite-merged-with-base-8B",
"djuna/L3.1-Purosani-2-8B",
"instruction-following",
"long-form-generation",
"roleplay",
"storytelling",
"llama-cpp",
"gguf-my-repo",
"base_model:ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B",
"base_model:quantized:ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-17T01:19:16Z |
---
base_model: ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Hermes3
- SuperNovaLite
- Purosani
- Llama3.1
- kotyKD/Llama3.1-Hermes3-SuperNovaLite-merged-with-base-8B
- djuna/L3.1-Purosani-2-8B
- instruction-following
- long-form-generation
- roleplay
- storytelling
- llama-cpp
- gguf-my-repo
---
# ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B-Q5_K_M-GGUF
This model was converted to GGUF format from [`ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B`](https://huggingface.co/ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B-Q5_K_M-GGUF --hf-file llama3.1-hermes3-supernova-8b-l3.1-purosani-2-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B-Q5_K_M-GGUF --hf-file llama3.1-hermes3-supernova-8b-l3.1-purosani-2-8b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B-Q5_K_M-GGUF --hf-file llama3.1-hermes3-supernova-8b-l3.1-purosani-2-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B-Q5_K_M-GGUF --hf-file llama3.1-hermes3-supernova-8b-l3.1-purosani-2-8b-q5_k_m.gguf -c 2048
```
|
RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf
|
RichardErkhov
| 2024-10-17T01:19:30Z | 9 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-16T22:03:32Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
zephyr-sft-bnb-4bit-DPO-mtbc-213steps - GGUF
- Model creator: https://huggingface.co/sonthenguyen/
- Original model: https://huggingface.co/sonthenguyen/zephyr-sft-bnb-4bit-DPO-mtbc-213steps/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q2_K.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q2_K.gguf) | Q2_K | 2.53GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.IQ3_S.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.IQ3_M.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q3_K.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q3_K.gguf) | Q3_K | 3.28GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q4_0.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q4_0.gguf) | Q4_0 | 3.83GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q4_K.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q4_K.gguf) | Q4_K | 4.07GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q4_1.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q4_1.gguf) | Q4_1 | 4.24GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q5_0.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q5_0.gguf) | Q5_0 | 4.65GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q5_K.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q5_K.gguf) | Q5_K | 4.78GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q5_1.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q5_1.gguf) | Q5_1 | 5.07GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q6_K.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q6_K.gguf) | Q6_K | 5.53GB |
| [zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q8_0.gguf](https://huggingface.co/RichardErkhov/sonthenguyen_-_zephyr-sft-bnb-4bit-DPO-mtbc-213steps-gguf/blob/main/zephyr-sft-bnb-4bit-DPO-mtbc-213steps.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
TrainOutput(global_step=213, training_loss=0.09253080396371667, metrics={'train_runtime': 1906.7032, 'train_samples_per_second': 1.791, 'train_steps_per_second': 0.112, 'total_flos': 0.0, 'train_loss': 0.09253080396371667, 'epoch': 0.4991212653778559})
---
base_model: unsloth/zephyr-sft-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- dpo
---
# Uploaded model
- **Developed by:** sonthenguyen
- **License:** apache-2.0
- **Finetuned from model :** unsloth/zephyr-sft-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nm-testing/Phi-3.5-vision-instruct-W8A8-Dynamic-Per-Token
|
nm-testing
| 2024-10-17T01:02:00Z | 19 | 0 | null |
[
"safetensors",
"phi3_v",
"custom_code",
"base_model:microsoft/Phi-3.5-vision-instruct",
"base_model:quantized:microsoft/Phi-3.5-vision-instruct",
"8-bit",
"compressed-tensors",
"region:us"
] | null | 2024-10-17T00:42:37Z |
---
base_model:
- microsoft/Phi-3.5-vision-instruct
---
## Eval
```
vllm serve nm-testing/Phi-3.5-vision-instruct-W8A8-Dynamic-Per-Token --trust-remote-code --max-model-len 100000
```
```
python -m eval.run eval_vllm --model_name nm-testing/Phi-3.5-vision-instruct-W8A8-Dynamic-Per-Token --url http://0.0.0.0:8000 --output_dir output/ --eval_name "chartqa"
...
================================================================================
Metrics:
{
"explicit_prompt_relaxed_correctness": 0.6472,
"anywhere_in_answer_relaxed_correctness": 0.6616
}
================================================================================
```
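Once the server from the command above is running, it exposes an OpenAI-compatible endpoint. Below is a minimal sketch of sending a ChartQA-style question with an image; the image URL and question are placeholders.
```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server started above
client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="nm-testing/Phi-3.5-vision-instruct-W8A8-Dynamic-Per-Token",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is the highest value shown in this chart?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},  # placeholder
        ],
    }],
    max_tokens=64,
)
print(response.choices[0].message.content)
```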
## Creation
```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM
from llmcompressor.modifiers.quantization import GPTQModifier
# from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import oneshot, wrap_hf_model_class
# Select model and load it.
MODEL_ID = "microsoft/Phi-3.5-vision-instruct"
model_class = wrap_hf_model_class(AutoModelForCausalLM)
model = model_class.from_pretrained(
MODEL_ID,
device_map="auto",
torch_dtype="auto",
trust_remote_code=True,
_attn_implementation="eager",
)
processor = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
# Select calibration dataset.
DATASET_ID = "HuggingFaceH4/ultrachat_200k"
DATASET_SPLIT = "train_sft"
# Select number of samples. 512 samples is a good place to start.
# Increasing the number of samples can improve accuracy.
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048
# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))
def preprocess(example):
return {
"text": processor.apply_chat_template(
example["messages"],
tokenize=False,
)
}
ds = ds.map(preprocess)
# Tokenize inputs.
def tokenize(sample):
return processor(
sample["text"],
padding=False,
max_length=MAX_SEQUENCE_LENGTH,
truncation=True,
add_special_tokens=False,
)
ds = ds.map(tokenize, remove_columns=ds.column_names)
print(ds)
# Configure algorithms. In this case, we:
# * apply SmoothQuant to make the activations easier to quantize
# * quantize the weights to int8 with GPTQ (static per channel)
# * quantize the activations to int8 (dynamic per token)
# Note: set sequential_update: true in the recipe to reduce memory
ignore=["re:.*lm_head", "re:model.vision_embed_tokens.*"]
recipe = [
# SmoothQuantModifier(smoothing_strength=0.8, ignore=ignore),
GPTQModifier(targets="Linear", scheme="W8A8", ignore=ignore),
]
# Apply algorithms.
oneshot(
model=model,
dataset=ds,
recipe=recipe,
max_seq_length=MAX_SEQUENCE_LENGTH,
num_calibration_samples=NUM_CALIBRATION_SAMPLES,
trust_remote_code_model=True,
)
# Confirm generations of the quantized model look sane.
print("\n\n")
print("========== SAMPLE GENERATION ==============")
input_ids = processor("Hello my name is", return_tensors="pt").input_ids.to("cuda")
output = model.generate(input_ids, max_new_tokens=100)
print(processor.decode(output[0]))
print("==========================================\n\n")
# Save to disk compressed.
SAVE_DIR = MODEL_ID.split("/")[1] + "-W8A8-Dynamic-Per-Token"
model.save_pretrained(SAVE_DIR, save_compressed=True)
processor.save_pretrained(SAVE_DIR)
```
|
linyongver/DPO_gemma-2b-it
|
linyongver
| 2024-10-17T00:56:22Z | 239 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-10-17T00:50:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lmussio/gemma-2-clinical-2b
|
lmussio
| 2024-10-17T00:53:06Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T00:45:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
win10/phi-3.5-sakura-yuzu-v3.0
|
win10
| 2024-10-17T00:38:09Z | 131 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"mergekit",
"merge",
"conversational",
"custom_code",
"arxiv:2306.01708",
"base_model:AXCXEPT/Borea-Phi-3.5-mini-Instruct-Coding",
"base_model:merge:AXCXEPT/Borea-Phi-3.5-mini-Instruct-Coding",
"base_model:ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1",
"base_model:merge:ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1",
"base_model:FreedomIntelligence/Apollo2-3.8B",
"base_model:merge:FreedomIntelligence/Apollo2-3.8B",
"base_model:MaziyarPanahi/calme-2.1-phi3.5-4b",
"base_model:merge:MaziyarPanahi/calme-2.1-phi3.5-4b",
"base_model:bunnycore/Phi-3.1-EvolKit-lora",
"base_model:merge:bunnycore/Phi-3.1-EvolKit-lora",
"base_model:bunnycore/Phi-3.5-mini-System-lora",
"base_model:merge:bunnycore/Phi-3.5-mini-System-lora",
"base_model:win10/Phi-3.5-mini-instruct-24-9-29",
"base_model:merge:win10/Phi-3.5-mini-instruct-24-9-29",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T00:35:04Z |
---
base_model:
- win10/Phi-3.5-mini-instruct-24-9-29
- MaziyarPanahi/calme-2.1-phi3.5-4b
- bunnycore/Phi-3.1-EvolKit-lora
- AXCXEPT/Borea-Phi-3.5-mini-Instruct-Coding
- bunnycore/Phi-3.1-EvolKit-lora
- ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1
- bunnycore/Phi-3.5-mini-System-lora
- FreedomIntelligence/Apollo2-3.8B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [win10/Phi-3.5-mini-instruct-24-9-29](https://huggingface.co/win10/Phi-3.5-mini-instruct-24-9-29) as a base.
### Models Merged
The following models were included in the merge:
* [MaziyarPanahi/calme-2.1-phi3.5-4b](https://huggingface.co/MaziyarPanahi/calme-2.1-phi3.5-4b) + [bunnycore/Phi-3.1-EvolKit-lora](https://huggingface.co/bunnycore/Phi-3.1-EvolKit-lora)
* [AXCXEPT/Borea-Phi-3.5-mini-Instruct-Coding](https://huggingface.co/AXCXEPT/Borea-Phi-3.5-mini-Instruct-Coding) + [bunnycore/Phi-3.1-EvolKit-lora](https://huggingface.co/bunnycore/Phi-3.1-EvolKit-lora)
* [ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1) + [bunnycore/Phi-3.5-mini-System-lora](https://huggingface.co/bunnycore/Phi-3.5-mini-System-lora)
* [FreedomIntelligence/Apollo2-3.8B](https://huggingface.co/FreedomIntelligence/Apollo2-3.8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1+bunnycore/Phi-3.5-mini-System-lora
parameters:
density: 0.5
weight: 0.5
- model: FreedomIntelligence/Apollo2-3.8B
parameters:
density: 0.5
weight: 0.5
- model: AXCXEPT/Borea-Phi-3.5-mini-Instruct-Coding+bunnycore/Phi-3.1-EvolKit-lora
parameters:
density: 0.5
weight: 0.5
- model: MaziyarPanahi/calme-2.1-phi3.5-4b+bunnycore/Phi-3.1-EvolKit-lora
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: win10/Phi-3.5-mini-instruct-24-9-29
parameters:
normalize: false
int8_mask: true
dtype: float16
```
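A merge like this can be reproduced by passing the configuration above to mergekit's command-line entry point; a minimal sketch (`config.yaml` and the output path are placeholder names):

```bash
pip install mergekit
# Run the TIES merge described by the YAML above; drop --cuda to merge on CPU.
mergekit-yaml config.yaml ./phi-3.5-sakura-yuzu-merged --cuda
```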
|
BoltMonkey/DreadMix
|
BoltMonkey
| 2024-10-17T00:33:02Z | 9 | 0 | null |
[
"safetensors",
"llama",
"merge",
"mergekit",
"lazymergekit",
"DreadPoor/Eunoia_Vespera-8B-LINEAR",
"DreadPoor/Promissum_Mane-8B-LINEAR-lorablated",
"model-index",
"region:us"
] | null | 2024-10-12T17:28:16Z |
---
tags:
- merge
- mergekit
- lazymergekit
- DreadPoor/Eunoia_Vespera-8B-LINEAR
- DreadPoor/Promissum_Mane-8B-LINEAR-lorablated
base_model:
- DreadPoor/Eunoia_Vespera-8B-LINEAR
- DreadPoor/Promissum_Mane-8B-LINEAR-lorablated
model-index:
- name: DreadMix
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 70.95
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BoltMonkey/DreadMix
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 34.85
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BoltMonkey/DreadMix
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 13.75
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BoltMonkey/DreadMix
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.6
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BoltMonkey/DreadMix
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.62
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BoltMonkey/DreadMix
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 31.0
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BoltMonkey/DreadMix
name: Open LLM Leaderboard
---
# DreadMix
DreadMix is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [DreadPoor/Eunoia_Vespera-8B-LINEAR](https://huggingface.co/DreadPoor/Eunoia_Vespera-8B-LINEAR)
* [DreadPoor/Promissum_Mane-8B-LINEAR-lorablated](https://huggingface.co/DreadPoor/Promissum_Mane-8B-LINEAR-lorablated)
## 🧩 Configuration
```yaml
models:
- model: DreadPoor/Aurora_faustus-8B-LORABLATED_ALT
- model: DreadPoor/Eunoia_Vespera-8B-LINEAR
parameters:
density: 0.53
weight: 0.55
- model: DreadPoor/Promissum_Mane-8B-LINEAR-lorablated
parameters:
density: 0.53
weight: 0.45
merge_method: dare_ties
base_model: DreadPoor/Aurora_faustus-8B-LORABLATED_ALT
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
# Run first (in your shell or notebook): pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "BoltMonkey/DreadMix"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BoltMonkey__DreadMix)
| Metric |Value|
|-------------------|----:|
|Avg. |28.46|
|IFEval (0-Shot) |70.95|
|BBH (3-Shot) |34.85|
|MATH Lvl 5 (4-Shot)|13.75|
|GPQA (0-shot) | 6.60|
|MuSR (0-shot) |13.62|
|MMLU-PRO (5-shot) |31.00|
|
gurudatta11/billsum-t5-small
|
gurudatta11
| 2024-10-17T00:32:06Z | 114 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-10-17T00:16:54Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: billsum-t5-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# billsum-t5-small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4770
- Rouge1: 0.1552
- Rouge2: 0.059
- Rougel: 0.1275
- Rougelsum: 0.1275
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.6042 | 0.1399 | 0.0499 | 0.1148 | 0.1145 | 19.0 |
| No log | 2.0 | 124 | 2.5220 | 0.1478 | 0.0538 | 0.1203 | 0.1202 | 19.0 |
| No log | 3.0 | 186 | 2.4874 | 0.1544 | 0.0581 | 0.1266 | 0.1265 | 19.0 |
| No log | 4.0 | 248 | 2.4770 | 0.1552 | 0.059 | 0.1275 | 0.1275 | 19.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
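A minimal usage sketch for a T5 summarizer fine-tuned on billsum-style data; the input text is a placeholder and the `summarize:` prefix follows the usual T5 convention:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint and summarize a bill excerpt (placeholder text).
summarizer = pipeline("summarization", model="gurudatta11/billsum-t5-small")
bill_text = "summarize: The people of the State of California do enact as follows: ..."
print(summarizer(bill_text, max_length=60, min_length=20)[0]["summary_text"])
```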
|
RichardErkhov/Defetya_-_qwen-4B-saiga-gguf
|
RichardErkhov
| 2024-10-17T00:24:44Z | 10 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-16T21:08:57Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qwen-4B-saiga - GGUF
- Model creator: https://huggingface.co/Defetya/
- Original model: https://huggingface.co/Defetya/qwen-4B-saiga/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [qwen-4B-saiga.Q2_K.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q2_K.gguf) | Q2_K | 1.51GB |
| [qwen-4B-saiga.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.IQ3_XS.gguf) | IQ3_XS | 1.66GB |
| [qwen-4B-saiga.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.IQ3_S.gguf) | IQ3_S | 1.73GB |
| [qwen-4B-saiga.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q3_K_S.gguf) | Q3_K_S | 1.73GB |
| [qwen-4B-saiga.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.IQ3_M.gguf) | IQ3_M | 1.81GB |
| [qwen-4B-saiga.Q3_K.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q3_K.gguf) | Q3_K | 1.89GB |
| [qwen-4B-saiga.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q3_K_M.gguf) | Q3_K_M | 1.89GB |
| [qwen-4B-saiga.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q3_K_L.gguf) | Q3_K_L | 2.03GB |
| [qwen-4B-saiga.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.IQ4_XS.gguf) | IQ4_XS | 2.08GB |
| [qwen-4B-saiga.Q4_0.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q4_0.gguf) | Q4_0 | 2.17GB |
| [qwen-4B-saiga.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.IQ4_NL.gguf) | IQ4_NL | 2.18GB |
| [qwen-4B-saiga.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q4_K_S.gguf) | Q4_K_S | 2.18GB |
| [qwen-4B-saiga.Q4_K.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q4_K.gguf) | Q4_K | 2.29GB |
| [qwen-4B-saiga.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q4_K_M.gguf) | Q4_K_M | 2.29GB |
| [qwen-4B-saiga.Q4_1.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q4_1.gguf) | Q4_1 | 2.38GB |
| [qwen-4B-saiga.Q5_0.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q5_0.gguf) | Q5_0 | 2.58GB |
| [qwen-4B-saiga.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q5_K_S.gguf) | Q5_K_S | 2.58GB |
| [qwen-4B-saiga.Q5_K.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q5_K.gguf) | Q5_K | 2.64GB |
| [qwen-4B-saiga.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q5_K_M.gguf) | Q5_K_M | 2.64GB |
| [qwen-4B-saiga.Q5_1.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q5_1.gguf) | Q5_1 | 2.79GB |
| [qwen-4B-saiga.Q6_K.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q6_K.gguf) | Q6_K | 3.03GB |
| [qwen-4B-saiga.Q8_0.gguf](https://huggingface.co/RichardErkhov/Defetya_-_qwen-4B-saiga-gguf/blob/main/qwen-4B-saiga.Q8_0.gguf) | Q8_0 | 3.92GB |
Original model description:
---
license: apache-2.0
tags:
- Russian
---
Qwen 4B chat by Alibaba, SFT-tuned on the Saiga dataset. Fine-tuned with the EasyDeL framework on a v3-8 Google TPU provided by TRC.
Qwen 4B model fine-tuned on Ilya Gusev's dataset. In my brief experience chatting with the model, it is better than Saiga-mistral and does not make mistakes with Russian grammatical cases. The model card will be extended after testing on Russian SuperGLUE. A DPO version may follow.
To use the model, set the eos token to <|im_end|>. A working Kaggle notebook: https://www.kaggle.com/code/defdet/smol-chatbot/notebook
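A minimal Transformers-side sketch of the eos-token note above, assuming the original Defetya/qwen-4B-saiga checkpoint (prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo name comes from the "Original model" link above; everything else is illustrative.
tokenizer = AutoTokenizer.from_pretrained("Defetya/qwen-4B-saiga")
model = AutoModelForCausalLM.from_pretrained("Defetya/qwen-4B-saiga", device_map="auto")

# Stop generation on <|im_end|>, as the note above requires.
im_end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")
inputs = tokenizer("Who are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, eos_token_id=im_end_id)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```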
|
RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf
|
RichardErkhov
| 2024-10-17T00:24:37Z | 6 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-16T21:34:34Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Bahasa-4b-chat - GGUF
- Model creator: https://huggingface.co/Bahasalab/
- Original model: https://huggingface.co/Bahasalab/Bahasa-4b-chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Bahasa-4b-chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q2_K.gguf) | Q2_K | 1.51GB |
| [Bahasa-4b-chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.IQ3_XS.gguf) | IQ3_XS | 1.66GB |
| [Bahasa-4b-chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.IQ3_S.gguf) | IQ3_S | 1.73GB |
| [Bahasa-4b-chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q3_K_S.gguf) | Q3_K_S | 1.73GB |
| [Bahasa-4b-chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.IQ3_M.gguf) | IQ3_M | 1.81GB |
| [Bahasa-4b-chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q3_K.gguf) | Q3_K | 1.89GB |
| [Bahasa-4b-chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q3_K_M.gguf) | Q3_K_M | 1.89GB |
| [Bahasa-4b-chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q3_K_L.gguf) | Q3_K_L | 2.03GB |
| [Bahasa-4b-chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.IQ4_XS.gguf) | IQ4_XS | 2.08GB |
| [Bahasa-4b-chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q4_0.gguf) | Q4_0 | 2.17GB |
| [Bahasa-4b-chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.IQ4_NL.gguf) | IQ4_NL | 2.18GB |
| [Bahasa-4b-chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q4_K_S.gguf) | Q4_K_S | 2.18GB |
| [Bahasa-4b-chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q4_K.gguf) | Q4_K | 2.29GB |
| [Bahasa-4b-chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q4_K_M.gguf) | Q4_K_M | 2.29GB |
| [Bahasa-4b-chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q4_1.gguf) | Q4_1 | 2.38GB |
| [Bahasa-4b-chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q5_0.gguf) | Q5_0 | 2.58GB |
| [Bahasa-4b-chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q5_K_S.gguf) | Q5_K_S | 2.58GB |
| [Bahasa-4b-chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q5_K.gguf) | Q5_K | 2.64GB |
| [Bahasa-4b-chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q5_K_M.gguf) | Q5_K_M | 2.64GB |
| [Bahasa-4b-chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q5_1.gguf) | Q5_1 | 2.79GB |
| [Bahasa-4b-chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q6_K.gguf) | Q6_K | 3.03GB |
| [Bahasa-4b-chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/Bahasalab_-_Bahasa-4b-chat-gguf/blob/main/Bahasa-4b-chat.Q8_0.gguf) | Q8_0 | 3.92GB |
Original model description:
---
language:
- id
license: other
license_name: tongyi-qianwen
---
# Bahasa-4b Model Report
## Model Name
**Bahasa-4b**
## Model Detail
Bahasa-4b continues the training of qwen-4b on a 10 billion-scale corpus of high-quality Indonesian text. The model outperforms some 4b and even some 7b models on Indonesian tasks.
## Model Developers
Bahasa AI
## Intended Use
This model is intended for various NLP tasks that require understanding and generating Indonesian language. It is suitable for applications such as question answering, sentiment analysis, document summarization, and more.
## Training Data
Bahasa-4b was trained on a 10 billion subset of Indonesian data drawn from a collected pool of 100 billion.
## Benchmarks
The following table shows the performance of Bahasa-4b compared to the models Sailor_4b and Mistral-7B-v0.1 across several benchmarks:
| Dataset | Version | Metric | Mode | Sailor_4b | Bahasa-4b-hf | Mistral-7B-v0.1 |
|----------------|---------|--------|------|-----------|--------------|-----------------|
| tydiqa-id | 0e9309 | EM | gen | 53.98 | 55.04 | 63.54 |
| tydiqa-id | 0e9309 | F1 | gen | 73.48 | 75.39 | 78.73 |
| xcopa-id | 36c11c | EM | ppl | 69.2 | 73.2 | 62.40 |
| xcopa-id | 36c11c | F1 | ppl | 69.2 | 73.2 | - |
| m3exam-id-ppl | ede415 | EM | ppl | 31.27 | 44.47 | 26.68 |
| belebele-id-ppl| 7fe030 | EM | ppl | 41.33 | 42.33 | 41.33 |
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Bahasalab/Bahasa-4b-chat-v2",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Bahasalab/Bahasa-4b-chat")
messages = [
{"role": "system", "content": "Kamu adalah asisten yang membantu"},
{"role": "user", "content": "kamu siapa"}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
input_ids=model_inputs.input_ids,
attention_mask=model_inputs.attention_mask,
max_new_tokens=512,
eos_token_id=tokenizer.eos_token_id
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
This data demonstrates that Bahasa-4b consistently outperforms the Sailor_4b model in various Indonesian language tasks, showing improvements in both EM (Exact Match) and F1 scores across different datasets, and is competitive with the Mistral-7B-v0.1 model.
|
rahul28122002/finetuned_billsum_t5
|
rahul28122002
| 2024-10-17T00:22:38Z | 114 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-10-17T00:17:47Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: finetuned_billsum_t5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_billsum_t5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5509
- Rouge1: 0.1423
- Rouge2: 0.0527
- Rougel: 0.1172
- Rougelsum: 0.1173
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8488 | 0.1307 | 0.0404 | 0.1093 | 0.1094 | 19.0 |
| No log | 2.0 | 124 | 2.6309 | 0.138 | 0.0487 | 0.1138 | 0.114 | 19.0 |
| No log | 3.0 | 186 | 2.5677 | 0.1428 | 0.0524 | 0.1167 | 0.1167 | 19.0 |
| No log | 4.0 | 248 | 2.5509 | 0.1423 | 0.0527 | 0.1172 | 0.1173 | 19.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
mlx-community/SuperNova-Medius-8bit
|
mlx-community
| 2024-10-17T00:16:52Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"mlx",
"conversational",
"base_model:arcee-ai/SuperNova-Medius",
"base_model:quantized:arcee-ai/SuperNova-Medius",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2024-10-16T22:40:32Z |
---
base_model: arcee-ai/SuperNova-Medius
library_name: transformers
license: apache-2.0
tags:
- mergekit
- merge
- mlx
model-index:
- name: SuperNova-Medius
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 55.6
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/SuperNova-Medius
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 49.3
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/SuperNova-Medius
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 32.48
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/SuperNova-Medius
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 17.9
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/SuperNova-Medius
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 19.19
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/SuperNova-Medius
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 48.83
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/SuperNova-Medius
name: Open LLM Leaderboard
---
# mlx-community/SuperNova-Medius-8bit
The Model [mlx-community/SuperNova-Medius-8bit](https://huggingface.co/mlx-community/SuperNova-Medius-8bit) was converted to MLX format from [arcee-ai/SuperNova-Medius](https://huggingface.co/arcee-ai/SuperNova-Medius) using mlx-lm version **0.19.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/SuperNova-Medius-8bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
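mlx-lm also exposes a command-line generator, so the same conversion can be tried without writing Python (a sketch; flag names may differ across mlx-lm versions):

```bash
python -m mlx_lm.generate --model mlx-community/SuperNova-Medius-8bit --prompt "hello" --max-tokens 128
```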
|
BroAlanTaps/GPT2-large-4-18000steps
|
BroAlanTaps
| 2024-10-17T00:10:10Z | 134 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T00:08:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
spow12/whisper-medium-zeroth_korean
|
spow12
| 2024-10-16T23:58:51Z | 426 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"whisper",
"automatic-speech-recognition",
"ko",
"dataset:Bingsu/zeroth-korean",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-10T01:02:13Z |
---
license: apache-2.0
datasets:
- Bingsu/zeroth-korean
language:
- ko
metrics:
- cer
- wer
pipeline_tag: automatic-speech-recognition
---
# Whisper-Medium-KsponSpeech
The Whisper-medium model fine-tuned with [KsponSpeech](https://huggingface.co/datasets/Murple/ksponspeech)
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [yw0nam](https://github.com/yw0nam)
- **Shared by:** [yw0nam](https://github.com/yw0nam)
- **Model type:** ASR
- **License:** apache-2.0
## Uses
```python
import librosa
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-medium", language="ko", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained('spow12/whisper-medium-zeroth_korean').cuda()

# wav_path: path to the audio file to transcribe
data, _ = librosa.load(wav_path, sr=16000)
input_features = processor(data, sampling_rate=16000, return_tensors="pt").input_features.cuda()
predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
```
### Metrics
| Metric | Result |
| --- | --- |
| WER | 3.96 |
| CER | 1.71 |
|
jefersonsehnem/sentiment-classifier
|
jefersonsehnem
| 2024-10-16T23:54:13Z | 164 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-16T23:53:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/MN-Lulanum-12B-FIX-i1-GGUF
|
mradermacher
| 2024-10-16T23:42:08Z | 135 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:djuna/MN-Lulanum-12B-FIX",
"base_model:quantized:djuna/MN-Lulanum-12B-FIX",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-16T21:50:49Z |
---
base_model: djuna/MN-Lulanum-12B-FIX
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/djuna/MN-Lulanum-12B-FIX
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
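For a single-file quant from the table below, one possible llama.cpp invocation looks roughly like this (binary and flag names vary between llama.cpp releases; the file name is one of the quants listed below):

```bash
llama-cli -m MN-Lulanum-12B-FIX.i1-Q4_K_M.gguf -p "Write a short poem about autumn." -n 128
```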
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF/resolve/main/MN-Lulanum-12B-FIX.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/MN-Lulanum-12B-FIX-GGUF
|
mradermacher
| 2024-10-16T23:42:07Z | 11 | 2 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:djuna/MN-Lulanum-12B-FIX",
"base_model:quantized:djuna/MN-Lulanum-12B-FIX",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-16T21:08:30Z |
---
base_model: djuna/MN-Lulanum-12B-FIX
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/djuna/MN-Lulanum-12B-FIX
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-GGUF/resolve/main/MN-Lulanum-12B-FIX.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-GGUF/resolve/main/MN-Lulanum-12B-FIX.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-GGUF/resolve/main/MN-Lulanum-12B-FIX.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-GGUF/resolve/main/MN-Lulanum-12B-FIX.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-GGUF/resolve/main/MN-Lulanum-12B-FIX.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-GGUF/resolve/main/MN-Lulanum-12B-FIX.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-GGUF/resolve/main/MN-Lulanum-12B-FIX.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-GGUF/resolve/main/MN-Lulanum-12B-FIX.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-GGUF/resolve/main/MN-Lulanum-12B-FIX.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-GGUF/resolve/main/MN-Lulanum-12B-FIX.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Lulanum-12B-FIX-GGUF/resolve/main/MN-Lulanum-12B-FIX.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf
|
RichardErkhov
| 2024-10-16T23:40:48Z | 33 | 0 | null |
[
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-16T22:23:52Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Explore_Llama-3.2-1B-Inst_v2 - GGUF
- Model creator: https://huggingface.co/DeepAutoAI/
- Original model: https://huggingface.co/DeepAutoAI/Explore_Llama-3.2-1B-Inst_v2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Explore_Llama-3.2-1B-Inst_v2.Q2_K.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q2_K.gguf) | Q2_K | 0.54GB |
| [Explore_Llama-3.2-1B-Inst_v2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.IQ3_XS.gguf) | IQ3_XS | 0.58GB |
| [Explore_Llama-3.2-1B-Inst_v2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [Explore_Llama-3.2-1B-Inst_v2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [Explore_Llama-3.2-1B-Inst_v2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.IQ3_M.gguf) | IQ3_M | 0.61GB |
| [Explore_Llama-3.2-1B-Inst_v2.Q3_K.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q3_K.gguf) | Q3_K | 0.64GB |
| [Explore_Llama-3.2-1B-Inst_v2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q3_K_M.gguf) | Q3_K_M | 0.64GB |
| [Explore_Llama-3.2-1B-Inst_v2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q3_K_L.gguf) | Q3_K_L | 0.68GB |
| [Explore_Llama-3.2-1B-Inst_v2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [Explore_Llama-3.2-1B-Inst_v2.Q4_0.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q4_0.gguf) | Q4_0 | 0.72GB |
| [Explore_Llama-3.2-1B-Inst_v2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.IQ4_NL.gguf) | IQ4_NL | 0.72GB |
| [Explore_Llama-3.2-1B-Inst_v2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q4_K_S.gguf) | Q4_K_S | 0.72GB |
| [Explore_Llama-3.2-1B-Inst_v2.Q4_K.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q4_K.gguf) | Q4_K | 0.75GB |
| [Explore_Llama-3.2-1B-Inst_v2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q4_K_M.gguf) | Q4_K_M | 0.75GB |
| [Explore_Llama-3.2-1B-Inst_v2.Q4_1.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q4_1.gguf) | Q4_1 | 0.77GB |
| [Explore_Llama-3.2-1B-Inst_v2.Q5_0.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q5_0.gguf) | Q5_0 | 0.83GB |
| [Explore_Llama-3.2-1B-Inst_v2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q5_K_S.gguf) | Q5_K_S | 0.83GB |
| [Explore_Llama-3.2-1B-Inst_v2.Q5_K.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q5_K.gguf) | Q5_K | 0.85GB |
| [Explore_Llama-3.2-1B-Inst_v2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q5_K_M.gguf) | Q5_K_M | 0.85GB |
| [Explore_Llama-3.2-1B-Inst_v2.Q5_1.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q5_1.gguf) | Q5_1 | 0.89GB |
| [Explore_Llama-3.2-1B-Inst_v2.Q6_K.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q6_K.gguf) | Q6_K | 0.95GB |
| [Explore_Llama-3.2-1B-Inst_v2.Q8_0.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.2-1B-Inst_v2-gguf/blob/main/Explore_Llama-3.2-1B-Inst_v2.Q8_0.gguf) | Q8_0 | 1.23GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| yjwon/ub_mistral7bv3_sft_dpo_beta1e-1_epoch9 | yjwon | 2024-10-16T23:35:34Z | 5 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-10-16T23:32:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
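Since the card leaves this section blank, here is a hypothetical starter sketch. It assumes the checkpoint loads like any Mistral-style chat model with `transformers`; the generation settings are arbitrary placeholders.

```python
# Hypothetical starter sketch (the card itself provides no usage code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yjwon/ub_mistral7bv3_sft_dpo_beta1e-1_epoch9"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Hello! What can you do?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```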
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf | RichardErkhov | 2024-10-16T23:26:25Z | 17 | 0 | null | ["gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational"] | null | 2024-10-16T20:00:43Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix - GGUF
- Model creator: https://huggingface.co/SongTonyLi/
- Original model: https://huggingface.co/SongTonyLi/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q2_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q2_K.gguf) | Q2_K | 1.27GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q3_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q3_K.gguf) | Q3_K | 1.57GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q4_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q4_0.gguf) | Q4_0 | 1.79GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q4_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q4_K.gguf) | Q4_K | 1.88GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q4_1.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q4_1.gguf) | Q4_1 | 1.95GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q5_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q5_0.gguf) | Q5_0 | 2.11GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q5_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q5_K.gguf) | Q5_K | 2.16GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q5_1.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q5_1.gguf) | Q5_1 | 2.28GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q6_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q6_K.gguf) | Q6_K | 2.46GB |
| [Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q8_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf/blob/main/Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q8_0.gguf) | Q8_0 | 3.19GB |
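As a hedged illustration (not from the original card), one of the quants above could be run locally with `llama-cpp-python`; the file choice, context length, and prompt are assumptions.

```python
# Sketch only: load one quant from this repo and run a chat completion with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/SongTonyLi_-_Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix-gguf",
    filename="Llama-3.2-3B-Instruct-SFT-D_chosen-pref-mix.Q4_K_M.gguf",  # example choice
    n_ctx=4096,  # assumed context window
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```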
Original model description:
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| yjwon/ub_mistral7bv3_sft_dpo_beta1e-1_epoch6 | yjwon | 2024-10-16T23:24:33Z | 35 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-10-16T23:21:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
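As the card does not provide starter code, a hedged sketch using the `transformers` text-generation pipeline is given below; it assumes the checkpoint works as a standard causal LM, and the prompt and settings are placeholders.

```python
# Hypothetical sketch only; not part of the original card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="yjwon/ub_mistral7bv3_sft_dpo_beta1e-1_epoch6",
    device_map="auto",  # requires accelerate; place the model automatically
)
out = generator(
    "Write one sentence about preference optimization.",
    max_new_tokens=100,
    do_sample=False,
)
print(out[0]["generated_text"])
```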
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| fabdream/80s-Fantasy-Movie | fabdream | 2024-10-16T23:17:53Z | 166 | 2 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"] | text-to-image | 2024-10-16T23:17:08Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
ArsMovieStill, 80s Fantasy Movie Still, The image is a portrait of a young
man wearing a black hooded cloak with a hood that covers his head. He is
holding a crystal ball in his hands and appears to be in the process of
casting it into the air. The man has long dark hair and is looking directly
at the camera with a serious expression on his face. The background is dark
and filled with stars and nebula giving the impression of a mystical and
mystical atmosphere. The colors are vibrant and the overall mood of the
image is intense and powerful., 1girl, solo, hood, blue eyes, black hair,
looking at viewer, hood up, robe, upper body, realistic
output:
url: images/2024-09-24-122121.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ArsMovieStill, 80s Fantasy Movie Still
---
# 80s Fantasy Movie
<Gallery />
## Trigger words
You should use `ArsMovieStill` together with `80s Fantasy Movie Still` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/fabdream/80s-Fantasy-Movie/tree/main) them in the Files & versions tab.
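A hypothetical usage sketch with `diffusers` follows; it is not part of the original card, and it assumes the repository contains standard LoRA safetensors that `load_lora_weights` can pick up. Remember to include the trigger words in the prompt.

```python
# Sketch only: load FLUX.1-dev, apply this LoRA, and generate one image.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("fabdream/80s-Fantasy-Movie")  # assumes standard LoRA weights in the repo
pipe.to("cuda")

prompt = "ArsMovieStill, 80s Fantasy Movie Still, a cloaked wizard raising a glowing crystal ball under a starry sky"
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]  # assumed settings
image.save("80s_fantasy_still.png")
```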
| RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf | RichardErkhov | 2024-10-16T23:14:26Z | 15 | 0 | null | ["gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational"] | null | 2024-10-16T21:33:06Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1 - GGUF
- Model creator: https://huggingface.co/jjaegii/
- Original model: https://huggingface.co/jjaegii/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q2_K.gguf) | Q2_K | 0.54GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.IQ3_XS.gguf) | IQ3_XS | 0.58GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.IQ3_M.gguf) | IQ3_M | 0.61GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q3_K.gguf) | Q3_K | 0.64GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q3_K_M.gguf) | Q3_K_M | 0.64GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q3_K_L.gguf) | Q3_K_L | 0.68GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q4_0.gguf) | Q4_0 | 0.72GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.IQ4_NL.gguf) | IQ4_NL | 0.72GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q4_K_S.gguf) | Q4_K_S | 0.72GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q4_K.gguf) | Q4_K | 0.75GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q4_K_M.gguf) | Q4_K_M | 0.75GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q4_1.gguf) | Q4_1 | 0.77GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q5_0.gguf) | Q5_0 | 0.83GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q5_K_S.gguf) | Q5_K_S | 0.83GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q5_K.gguf) | Q5_K | 0.85GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q5_K_M.gguf) | Q5_K_M | 0.85GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q5_1.gguf) | Q5_1 | 0.89GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q6_K.gguf) | Q6_K | 0.95GB |
| [Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf/blob/main/Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1.Q8_0.gguf) | Q8_0 | 1.23GB |
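If you want to enumerate the available quant files programmatically (for example, to pick the largest one that fits your RAM), a small sketch with `huggingface_hub` is shown below; it is an illustrative addition, not part of the original card.

```python
# Sketch only: list every GGUF file published in this quant repo.
from huggingface_hub import HfApi

api = HfApi()
repo_id = "RichardErkhov/jjaegii_-_Llama-3.2-1B-Instruct-LoRA-ko-kubefix-v1-gguf"
for name in sorted(f for f in api.list_repo_files(repo_id) if f.endswith(".gguf")):
    print(name)
```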
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| anusmriti298/llama3_1_lora_sft_DJ | anusmriti298 | 2024-10-16T23:11:46Z | 5 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-10-16T23:05:35Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** anusmriti298
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
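A hypothetical inference sketch follows; it is not part of the original card and assumes the uploaded weights load back through Unsloth's `FastLanguageModel`. The sequence length and prompt are placeholders.

```python
# Sketch only: reload the fine-tuned weights with Unsloth for fast inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="anusmriti298/llama3_1_lora_sft_DJ",
    max_seq_length=2048,   # assumption; adjust to your use case
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster generation path

inputs = tokenizer("Hello! What were you fine-tuned to do?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```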
|