Dataset columns (each record below lists these fields in order, separated by `|`):

| Column | Type | Values |
|:--|:--|:--|
| modelId | string | lengths 5–139 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-29 00:46:34 |
| downloads | int64 | 0–223M |
| likes | int64 | 0–11.7k |
| library_name | string | 502 classes |
| tags | sequence | lengths 1–4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-29 00:44:25 |
| card | string | lengths 11–1.01M |
parrottygg/phi3v2 | parrottygg | 2024-11-01T12:15:28Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T12:11:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
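This section is left as a placeholder; as an unofficial sketch inferred only from the repo tags (`transformers`, `phi3`, `text-generation`, `custom_code`), loading could look like:

```python
# Unofficial sketch based on the repo tags, not an author-provided quickstart.
# trust_remote_code=True is assumed because of the "custom_code" tag.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "parrottygg/phi3v2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```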
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rfajri/sentiment-indobert-v1 | rfajri | 2024-11-01T12:15:13Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-01T12:14:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
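This section is left as a placeholder; an unofficial sketch based on the repo tags (`bert`, `text-classification`) and the model name, which suggests Indonesian sentiment analysis:

```python
# Unofficial sketch; label names and the exact input language are not documented.
from transformers import pipeline

classifier = pipeline("text-classification", model="rfajri/sentiment-indobert-v1")
print(classifier("Pelayanan restoran ini sangat memuaskan."))  # illustrative Indonesian input
```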
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
QuantFactory/SmolLM2-360M-GGUF | QuantFactory | 2024-11-01T12:09:00Z | 254 | 2 | transformers | [
"transformers",
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-01T12:06:09Z |
---
library_name: transformers
license: apache-2.0
language:
- en
---
[](https://hf.co/QuantFactory)
# QuantFactory/SmolLM2-360M-GGUF
This is quantized version of [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M) created using llama.cpp
# Original Model Card
# SmolLM2

## Table of Contents
1. [Model Summary](#model-summary)
2. [Limitations](#limitations)
3. [Training](#training)
4. [License](#license)
5. [Citation](#citation)
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning. The 360M model was trained on 4 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
### How to use
```bash
pip install transformers
```
#### Running the model on CPU/GPU/multi GPU
* _Using full precision_
```python
# pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-360M"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
checkpoint = "HuggingFaceTB/SmolLM2-360M"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for fp16 use `torch_dtype=torch.float16` instead
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
```python
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 723.56 MB
```
## Evaluation
In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.
## Base Pre-Trained Model
| Metrics | SmolLM2-360M | Qwen2.5-0.5B | SmolLM-360M |
|:-------------------|:------------:|:------------:|:------------:|
| HellaSwag | **54.5** | 51.2 | 51.8 |
| ARC (Average) | **53.0** | 45.4 | 50.1 |
| PIQA | **71.7** | 69.9 | 71.6 |
| MMLU (cloze) | **35.8** | 33.7 | 34.4 |
| CommonsenseQA | **38.0** | 31.6 | 35.3 |
| TriviaQA | **16.9** | 4.3 | 9.1 |
| Winogrande | 52.5 | **54.1** | 52.8 |
| OpenBookQA | **37.4** | **37.4** | 37.2 |
| GSM8K (5-shot) | 3.2 | **33.4** | 1.6 |
## Instruction Model
| Metric | SmolLM2-360M-Instruct | Qwen2.5-0.5B-Instruct | SmolLM-360M-Instruct |
|:-----------------------------|:---------------------:|:---------------------:|:---------------------:|
| IFEval (Average prompt/inst) | **41.0** | 31.6 | 19.8 |
| MT-Bench | 3.66 | **4.16** | 3.37 |
| HellaSwag | **52.1** | 48.0 | 47.9 |
| ARC (Average) | **43.7** | 37.3 | 38.8 |
| PIQA | **70.8** | 67.2 | 69.4 |
| MMLU (cloze) | **32.8** | 31.7 | 30.6 |
| BBH (3-shot) | 27.3 | **30.7** | 24.4 |
| GSM8K (5-shot) | 7.43 | **26.8** | 1.36 |
## Limitations
SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
## Training
### Model
- **Architecture:** Transformer decoder
- **Pretraining tokens:** 4T
- **Precision:** bfloat16
### Hardware
- **GPUs:** 64 H100
### Software
- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation
```bibtex
@misc{allal2024SmolLM2,
  title={SmolLM2 - with great data, comes great performance},
  author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
  year={2024},
}
```
|
Hi-Q/krx_qwen_2-7b-it_1101 | Hi-Q | 2024-11-01T12:07:58Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"krx",
"conversational",
"en",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:finetune:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T10:32:03Z | ---
base_model: unsloth/Qwen2-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- krx
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Hi-Q
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
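No inference snippet is included; a minimal, unofficial sketch (assuming the Qwen2 chat template ships with the tokenizer; the prompt is illustrative):

```python
# Unofficial sketch; the prompt is illustrative, not from the authors.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hi-Q/krx_qwen_2-7b-it_1101"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # requires accelerate

messages = [{"role": "user", "content": "Explain what a stock index is in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```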
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
letuandat/tts-nnng-2410 | letuandat | 2024-11-01T12:04:49Z | 103 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-10-31T16:25:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
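This section is left as a placeholder; an unofficial sketch using the standard `transformers` VITS API (the repo tags are `vits`, `text-to-audio`; the input text below is illustrative and the model's language is not documented):

```python
# Unofficial sketch of the standard VITS text-to-audio API in transformers.
import torch
from transformers import AutoTokenizer, VitsModel

model_id = "letuandat/tts-nnng-2410"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = VitsModel.from_pretrained(model_id)

inputs = tokenizer("xin chào", return_tensors="pt")  # illustrative input text
with torch.no_grad():
    waveform = model(**inputs).waveform  # (batch, samples) at model.config.sampling_rate
```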
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
QuantFactory/SmolLM2-360M-Instruct-GGUF | QuantFactory | 2024-11-01T12:03:38Z | 246 | 3 | transformers | [
"transformers",
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T12:00:52Z |
---
library_name: transformers
license: apache-2.0
language:
- en
---
[](https://hf.co/QuantFactory)
# QuantFactory/SmolLM2-360M-Instruct-GGUF
This is quantized version of [HuggingFaceTB/SmolLM2-360M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) created using llama.cpp
# Original Model Card
# SmolLM2

## Table of Contents
1. [Model Summary](#model-summary)
2. [Limitations](#limitations)
3. [Training](#training)
4. [License](#license)
5. [Citation](#citation)
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning. The 360M model was trained on 4 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
### How to use
### Transformers
```bash
pip install transformers
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-360M-Instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
messages = [{"role": "user", "content": "What is the capital of France."}]
input_text=tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
### Chat in TRL
You can also use the TRL CLI to chat with the model from the terminal:
```bash
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM2-360M-Instruct --device cpu
```
## Evaluation
In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.
## Base Pre-Trained Model
| Metrics | SmolLM2-360M | Qwen2.5-0.5B | SmolLM-360M |
|:-------------------|:------------:|:------------:|:------------:|
| HellaSwag | **54.5** | 51.2 | 51.8 |
| ARC (Average) | **53.0** | 45.4 | 50.1 |
| PIQA | **71.7** | 69.9 | 71.6 |
| MMLU (cloze) | **35.8** | 33.7 | 34.4 |
| CommonsenseQA | **38.0** | 31.6 | 35.3 |
| TriviaQA | **16.9** | 4.3 | 9.1 |
| Winogrande | 52.5 | **54.1** | 52.8 |
| OpenBookQA | **37.4** | **37.4** | 37.2 |
| GSM8K (5-shot) | 3.2 | **33.4** | 1.6 |
## Instruction Model
| Metric | SmolLM2-360M-Instruct | Qwen2.5-0.5B-Instruct | SmolLM-360M-Instruct |
|:-----------------------------|:---------------------:|:---------------------:|:---------------------:|
| IFEval (Average prompt/inst) | **41.0** | 31.6 | 19.8 |
| MT-Bench | 3.66 | **4.16** | 3.37 |
| HellaSwag | **52.1** | 48.0 | 47.9 |
| ARC (Average) | **43.7** | 37.3 | 38.8 |
| PIQA | **70.8** | 67.2 | 69.4 |
| MMLU (cloze) | **32.8** | 31.7 | 30.6 |
| BBH (3-shot) | 27.3 | **30.7** | 24.4 |
| GSM8K (5-shot) | 7.43 | **26.8** | 1.36 |
## Limitations
SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
## Training
### Model
- **Architecture:** Transformer decoder
- **Pretraining tokens:** 4T
- **Precision:** bfloat16
### Hardware
- **GPUs:** 64 H100
### Software
- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation
```bibtex
@misc{allal2024SmolLM2,
  title={SmolLM2 - with great data, comes great performance},
  author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
  year={2024},
}
```
|
johnatanebonilla/w_small_lv_70 | johnatanebonilla | 2024-11-01T12:01:32Z | 85 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-30T03:26:56Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w_small_lv_70
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w_small_lv_70
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6468
- Wer: 77.1230
## Model description
More information needed
## Intended uses & limitations
More information needed
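No usage code is provided; a minimal, unofficial sketch with the standard ASR pipeline:

```python
# Unofficial sketch; the model's target language is not documented in this card.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="johnatanebonilla/w_small_lv_70")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```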
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.7247 | 0.7184 | 1000 | 0.6818 | 77.6120 |
| 0.5041 | 1.4368 | 2000 | 0.6395 | 75.4202 |
| 0.3808 | 2.1552 | 3000 | 0.6313 | 85.2857 |
| 0.3595 | 2.8736 | 4000 | 0.6264 | 71.4611 |
| 0.2771 | 3.5920 | 5000 | 0.6468 | 77.1230 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu118
- Datasets 3.0.0
- Tokenizers 0.19.1
|
mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF | mradermacher | 2024-11-01T12:00:06Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.7",
"base_model:quantized:AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.7",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-01T11:48:02Z | ---
base_model: AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.7
language:
- en
library_name: transformers
license: cc-by-sa-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.7
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
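As a concrete, unofficial example, the quants can be loaded with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), one of several llama.cpp-compatible runtimes (the file name matches the Q4_K_M row in the table below):

```python
# Unofficial sketch; pip install llama-cpp-python (any llama.cpp runtime works).
from llama_cpp import Llama

llm = Llama(
    model_path="AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q4_K_M.gguf",  # downloaded from this repo
    n_ctx=2048,
)
out = llm("The capital of Korea is", max_tokens=32)
print(out["choices"][0]["text"])
```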
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q2_K.gguf) | Q2_K | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q3_K_S.gguf) | Q3_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q3_K_M.gguf) | Q3_K_M | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q3_K_L.gguf) | Q3_K_L | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.IQ4_XS.gguf) | IQ4_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q4_K_S.gguf) | Q4_K_S | 3.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q4_K_M.gguf) | Q4_K_M | 3.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q5_K_S.gguf) | Q5_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q5_K_M.gguf) | Q5_K_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q6_K.gguf) | Q6_K | 5.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.Q8_0.gguf) | Q8_0 | 6.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AIFT-ko-orca-plat-Yi-ko-6b-v1.7-GGUF/resolve/main/AIFT-ko-orca-plat-Yi-ko-6b-v1.7.f16.gguf) | f16 | 12.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
QuantFactory/SmolLM2-1.7B-Instruct-GGUF | QuantFactory | 2024-11-01T11:57:57Z | 52 | 3 | transformers | [
"transformers",
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T11:48:52Z |
---
library_name: transformers
license: apache-2.0
language:
- en
---
[](https://hf.co/QuantFactory)
# QuantFactory/SmolLM2-1.7B-Instruct-GGUF
This is quantized version of [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct) created using llama.cpp
# Original Model Card
# SmolLM2

## Table of Contents
1. [Model Summary](#model-summary)
2. [Evaluation](#evaluation)
3. [Examples](#examples)
4. [Limitations](#limitations)
5. [Training](#training)
6. [License](#license)
7. [Citation](#citation)
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
### How to use
### Transformers
```bash
pip install transformers
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
messages = [{"role": "user", "content": "What is the capital of France."}]
input_text=tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
### Chat in TRL
You can also use the TRL CLI to chat with the model from the terminal:
```bash
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM2-1.7B-Instruct --device cpu
```
## Evaluation
In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.
## Base Pre-Trained Model
| Metric | SmolLM2-1.7B | Llama-1B | Qwen2.5-1.5B | SmolLM1-1.7B |
|------------------|--------------|-------------|---------------|--------------|
| HellaSwag | **68.7** | 61.2 | 66.4 | 62.9 |
| ARC (Average) | **60.5** | 49.2 | 58.5 | 59.9 |
| PIQA | **77.6** | 74.8 | 76.1 | 76.0 |
| MMLU-Pro (MCF) | **19.4** | 11.7 | 13.7 | 10.8 |
| CommonsenseQA | **43.6** | 41.2 | 34.1 | 38.0 |
| TriviaQA | **36.7** | 28.1 | 20.9 | 22.5 |
| Winogrande | **59.4** | 57.8 | 59.3 | 54.7 |
| OpenBookQA | 42.2 | 38.4 | 40.0 | **42.4** |
| GSM8K (5-shot) | 31.0 | 7.2 | **61.3** | 5.5 |
## Instruction Model
| Metric | SmolLM2-1.7B-Instruct | Llama-1B-Instruct | Qwen2.5-1.5B-Instruct | SmolLM1-1.7B-Instruct |
|:-----------------------------|:---------------------:|:-----------------:|:----------------------:|:----------------------:|
| IFEval (Average prompt/inst) | **56.7** | 53.5 | 47.4 | 23.1 |
| MT-Bench | 6.13 | 5.48 | **6.52** | 4.33 |
| OpenRewrite-Eval (micro_avg RougeL) | 44.9 | 39.2 | **46.9** | NaN |
| HellaSwag | **66.1** | 56.1 | 60.9 | 55.5 |
| ARC (Average) | **51.7** | 41.6 | 46.2 | 43.7 |
| PIQA | **74.4** | 72.3 | 73.2 | 71.6 |
| MMLU-Pro (MCF) | 19.3 | 12.7 | **24.2** | 11.7 |
| BBH (3-shot) | 32.2 | 27.6 | **35.3** | 25.7 |
| GSM8K (5-shot) | **48.2** | 26.8 | 42.8 | 4.62 |
## Examples
Below are some system and instruct prompts that work well for special tasks
### Text rewriting
```python
system_prompt_rewrite = "You are an AI writing assistant. Your task is to rewrite the user's email to make it more professional and approachable while maintaining its main points and key message. Do not return any text other than the rewritten message."
user_prompt_rewrite = "Rewrite the message below to make it more friendly and approachable while maintaining its main points and key message. Do not add any new information or return any text other than the rewritten message\nThe message:"
messages = [{"role": "system", "content": system_prompt_rewrite}, {"role": "user", "content":f"{user_prompt_rewrite} The CI is failing after your last commit!}"]
input_text=tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
```
Hey there! I noticed that the CI isn't passing after your latest commit. Could you take a look and let me know what's going on? Thanks so much for your help!
```
### Summarization
```python
system_prompt_summarize = "Provide a concise, objective summary of the input text in up to three sentences, focusing on key actions and intentions without using second or third person pronouns."
messages = [{"role": "system", "content": system_prompt_rewrite}, {"role": "user", "content": INSERT_LONG_EMAIL]
input_text=tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
### Function calling
SmolLM2-1.7B-Instruct can handle function calling; it scores 27% on the [BFCL Leaderboard](https://gorilla.cs.berkeley.edu/blogs/8_berkeley_function_calling_leaderboard.html). Here's how you can leverage it:
```python
import json
import re
from typing import Any, Optional
from jinja2 import Template
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.utils import get_json_schema
system_prompt = Template("""You are an expert in composing functions. You are given a question and a set of possible functions.
Based on the question, you will need to make one or more function/tool calls to achieve the purpose.
If none of the functions can be used, point it out and refuse to answer.
If the given question lacks the parameters required by the function, also point it out.
You have access to the following tools:
<tools>{{ tools }}</tools>
The output MUST strictly adhere to the following format, and NO other text MUST be included.
The example format is as follows. Please make sure the parameter type is correct. If no function call is needed, please make the tool calls an empty list '[]'.
<tool_call>[
{"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}},
... (more tool calls as required)
]</tool_call>""")
def prepare_messages(
    query: str,
    tools: Optional[dict[str, Any]] = None,
    history: Optional[list[dict[str, str]]] = None
) -> list[dict[str, str]]:
    """Prepare the system and user messages for the given query and tools.

    Args:
        query: The query to be answered.
        tools: The tools available to the user. Defaults to None, in which
            case an empty tool list is passed to the model.
        history: Exchange of messages, including the system_prompt from
            the first query. Defaults to None, the first message in a conversation.
    """
    if tools is None:
        tools = []
    if history:
        messages = history.copy()
        messages.append({"role": "user", "content": query})
    else:
        messages = [
            {"role": "system", "content": system_prompt.render(tools=json.dumps(tools))},
            {"role": "user", "content": query}
        ]
    return messages

def parse_response(text: str) -> str | list[dict[str, Any]]:
    """Parse a response from the model, returning either the parsed list of
    tool calls, or the raw model text if no tool call could be extracted.

    Args:
        text: Response from the model.
    """
    pattern = r"<tool_call>(.*?)</tool_call>"
    matches = re.findall(pattern, text, re.DOTALL)
    if matches:
        return json.loads(matches[0])
    return text
```
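The card stops at the helper definitions; a hypothetical end-to-end call, assuming `model`, `tokenizer`, and `device` are set up as in the quickstart above (the `get_weather` tool is illustrative only):

```python
# Hypothetical usage of the helpers above; the tool below is illustrative only.
def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to query.
    """
    return f"Sunny in {city}"

tools = [get_json_schema(get_weather)]
messages = prepare_messages("What is the weather in Paris?", tools=tools)
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(parse_response(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)))
```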
## Limitations
SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
## Training
### Model
- **Architecture:** Transformer decoder
- **Pretraining tokens:** 11T
- **Precision:** bfloat16
### Hardware
- **GPUs:** 256 H100
### Software
- **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
- **Alignment Handbook:** [alignment-handbook](https://github.com/huggingface/alignment-handbook/)
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation
```bibtex
@misc{allal2024SmolLM2,
  title={SmolLM2 - with great data, comes great performance},
  author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
  year={2024},
}
```
|
THU-KEG/Llama3-Crab-DPO | THU-KEG | 2024-11-01T11:49:36Z | 7 | 2 | null | [
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2410.24175",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-11-01T08:24:48Z | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
---
# Model Card for Llama3-Crab-DPO
<!-- Provide a quick summary of what the model is/does. -->
<p align="justify">
Large language models (LLMs) struggle to follow instructions with complex constraints in format, length, etc. Following the conventional instruction-tuning practice, previous works conduct post-training on complex instruction-response pairs generated by feeding complex instructions to advanced LLMs. However, even advanced LLMs cannot follow complex instructions well, thus limiting the quality of generated data. In this work, we find that <b><i>existing datasets inherently contain implicit complex constraints</i></b> and propose a novel data generation technique, <b><i>constraint back-translation</i></b>. Specifically, we take the high-quality instruction-response pairs in existing datasets and only adopt advanced LLMs to add complex constraints already met by the responses to the instructions, which naturally reduces costs and data noise. In the experiments, we adopt Llama3-70B-Instruct to back-translate constraints and create a high-quality complex instruction-response dataset, named <b>CRAB</b>. We present that post-training on <font face="Verdana">CRAB</font> improves multiple backbone LLMs' complex instruction-following ability, evaluated on extensive instruction-following benchmarks. We further find that constraint back-translation also serves as a useful auxiliary training objective in post-training.
</p>

- Paper: [Constraint Back-translation Improves Complex Instruction Following of Large Language Models](https://arxiv.org/abs/2410.24175)
- Github: [THU/Crab](https://github.com/THU-KEG/Crab)
### Model Performance
| Models | Base Model | IFEval (AVG) | FollowBench (HSR) L1-L2 | FollowBench (HSR) L3-L5 | FollowBench (HSR) AVG | AVG |
|:-------------------|:----------|:------------:|:-----------------------:|:-----------------------:|:---------------------:|:----:|
| GPT-3.5-turbo | GPT | 66.3 | 74.2 | 61 | 66.2 | 66.3 |
| GPT-4 | GPT | 81.3 | 80.4 | 69.4 | 73.8 | 77.6 |
| Vicuna-13b-V1.5 | Llama2 | 50.3 | 66.3 | 39.8 | 50.4 | 50.4 |
| WizardLM-13B-V1.2 | Llama2 | 51.4 | 56.5 | 36.9 | 44.7 | 48 |
| Conifer-13B | Llama2 | 50.2 | 57.1 | 40.3 | 47 | 48.6 |
| Zephyr-7B-beta | Mistral | 45.4 | 54.8 | 38.2 | 44.8 | 45.1 |
| Conifer-7B | Mistral | 53.9 | 51.9 | 40.2 | 44.9 | 49.4 |
| Conifer-7B-DPO | Mistral | 55.7 | 57 | 45.4 | 50 | 52.9 |
| Llama3 8B | Llama3 | 31.4 | 6.8 | 8.2 | 7.6 | 19.5 |
| Llama3-crab | Llama3 | 46.9 | 51.2 | 26.7 | 36.5 | 41.7 |
| Llama3-crab + DPO | Llama3 | 49.7 | 56.8 | 38.1 | 45.5 | 47.6 |
| Mistral 7B | Mistral | 25.2 | 15.5 | 6.5 | 10.1 | 17.7 |
| Mistral-crab | Mistral | 54.5 | 59.2 | 32.8 | 43.3 | 48.9 |
| Mistral-crab + DPO | Mistral | 59.4 | 59.9 | 42.5 | 49.4 | 54.4 |
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Yunjia Qi, Hao Peng, Xiaozhi Wang, Bin Xu, Lei Hou, Juanzi Li
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **Finetuned from model [optional]:** Llama3-8B
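The card does not include a quickstart; a minimal, unofficial sketch with `transformers` (assuming the Llama-3 chat template is bundled with the tokenizer; the prompt is illustrative):

```python
# Unofficial sketch; the prompt and generation settings are illustrative, not from the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THU-KEG/Llama3-Crab-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # requires accelerate

messages = [{"role": "user", "content": "Describe the water cycle in exactly three sentences, each under 15 words."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```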
|
parrottygg/phi3v1 | parrottygg | 2024-11-01T11:48:13Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T11:39:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
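This section is left as a placeholder; as with the sibling `phi3v2` repo, an unofficial sketch inferred from the tags (`text-generation`, `custom_code`) could use the high-level pipeline:

```python
# Unofficial sketch; trust_remote_code=True is assumed from the "custom_code" tag.
from transformers import pipeline

generator = pipeline("text-generation", model="parrottygg/phi3v1", trust_remote_code=True)
print(generator("Once upon a time", max_new_tokens=32)[0]["generated_text"])
```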
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rahulvk007/ExtractQueNumber | rahulvk007 | 2024-11-01T11:33:45Z | 142 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/SmolLM2-360M",
"base_model:finetune:unsloth/SmolLM2-360M",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T11:33:28Z | ---
base_model: unsloth/SmolLM2-360M
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** rahulvk007
- **License:** apache-2.0
- **Finetuned from model :** unsloth/SmolLM2-360M
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
razhan/trocr-base-ckb | razhan | 2024-11-01T11:14:12Z | 66 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-04-01T11:35:44Z | # Kurdish OCR
Transformer-based OCR trained on synthetic Central Kurdish data.
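A minimal, unofficial usage sketch with the standard TrOCR API (assuming the repo bundles a processor config; `line.png` is a placeholder for a cropped text-line image):

```python
# Unofficial sketch of the standard TrOCR inference flow.
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

model_id = "razhan/trocr-base-ckb"
processor = TrOCRProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

pixel_values = processor(images=Image.open("line.png").convert("RGB"), return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```
|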
Ariffiq99/Randomized_Roberta_Stacked_model_80 | Ariffiq99 | 2024-11-01T11:14:00Z | 103 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-11-01T09:10:23Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Randomized_Roberta_Stacked_model_80
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Randomized_Roberta_Stacked_model_80
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8535
- F1: 0.7395
## Model description
More information needed
## Intended uses & limitations
More information needed
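No usage example is provided; a minimal, unofficial sketch of the standard `transformers` multiple-choice API (the question and choices below are illustrative):

```python
# Unofficial sketch of the standard multiple-choice inference pattern.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_id = "Ariffiq99/Randomized_Roberta_Stacked_model_80"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

question = "Where does the sun rise?"     # illustrative
choices = ["In the east", "In the west"]  # illustrative
enc = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (batch=1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_choices)
print(choices[logits.argmax(-1).item()])
```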
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.64 | 1.0 | 1261 | 0.7758 | 0.7327 |
| 0.5704 | 2.0 | 2522 | 0.7685 | 0.7408 |
| 0.5059 | 3.0 | 3783 | 0.8209 | 0.7401 |
| 0.4519 | 4.0 | 5044 | 0.8222 | 0.7381 |
| 0.4177 | 5.0 | 6305 | 0.8535 | 0.7395 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
chendelong/DirectSAM-b0-1024px-sa1b-2ep-dsa-50ep-1101 | chendelong | 2024-11-01T11:06:34Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"segformer",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-11-01T11:06:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlfoundations-dev/OH_original_wo_gpteacher | mlfoundations-dev | 2024-11-01T10:59:02Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T06:12:40Z | ---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: OH_original_wo_gpteacher
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OH_original_wo_gpteacher
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the mlfoundations-dev/OH_original_wo_gpteacher dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6055
## Model description
More information needed
## Intended uses & limitations
More information needed
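The card omits a usage example; below is a minimal generation sketch, assuming the tokenizer ships a chat template (typical for llama-factory conversational fine-tunes, though not stated here).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlfoundations-dev/OH_original_wo_gpteacher"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain gradient accumulation in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```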
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 1738
- num_epochs: 3.0
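A quick sanity check on the derived sizes above: the total train batch size of 512 is 8 samples per device × 16 devices × 4 gradient-accumulation steps, and the total eval batch size of 128 is 8 × 16.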
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6194 | 1.0 | 334 | 0.6101 |
| 0.5614 | 2.0 | 668 | 0.6015 |
| 0.51 | 3.0 | 1002 | 0.6055 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.3.0
- Datasets 2.21.0
- Tokenizers 0.20.1
|
hyobi18220/jam_krx_qwen2.5_v7 | hyobi18220 | 2024-11-01T10:41:01Z | 5 | 0 | null | [
"safetensors",
"qwen2",
"krx",
"en",
"ko",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"region:us"
] | null | 2024-11-01T10:20:20Z | ---
language:
- en
- ko
base_model:
- unsloth/Qwen2.5-7B-Instruct
tags:
- krx
--- |
shastraai/Shastra-LLAMA2-Math-Commonsense-SLERP | shastraai | 2024-11-01T10:23:48Z | 5 | 0 | null | [
"safetensors",
"llama",
"merge",
"mergekit",
"lazymergekit",
"shastraai/Shastra-LLAMA-Math-DPO",
"shastraai/Shastra-LLAMA2-Commonsense-SFT",
"base_model:shastraai/Shastra-LLAMA-Math-DPO",
"base_model:merge:shastraai/Shastra-LLAMA-Math-DPO",
"base_model:shastraai/Shastra-LLAMA2-Commonsense-SFT",
"base_model:merge:shastraai/Shastra-LLAMA2-Commonsense-SFT",
"region:us"
] | null | 2024-11-01T10:20:25Z | ---
base_model:
- shastraai/Shastra-LLAMA-Math-DPO
- shastraai/Shastra-LLAMA2-Commonsense-SFT
tags:
- merge
- mergekit
- lazymergekit
- shastraai/Shastra-LLAMA-Math-DPO
- shastraai/Shastra-LLAMA2-Commonsense-SFT
---
# Shastra-LLAMA2-Math-Commonsense
Shastra-LLAMA2-Math-Commonsense is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [shastraai/Shastra-LLAMA-Math-DPO](https://huggingface.co/shastraai/Shastra-LLAMA-Math-DPO)
* [shastraai/Shastra-LLAMA2-Commonsense-SFT](https://huggingface.co/shastraai/Shastra-LLAMA2-Commonsense-SFT)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: shastraai/Shastra-LLAMA-Math-DPO
layer_range: [0, 32]
- model: shastraai/Shastra-LLAMA2-Commonsense-SFT
layer_range: [0, 32]
merge_method: slerp
base_model: shastraai/Shastra-LLAMA-Math-DPO
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
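A note on the schedule above (not spelled out in the generated card): mergekit interpolates each `value` list across layer depth, with t = 0 keeping the base model's tensor (Shastra-LLAMA-Math-DPO) and t = 1 taking the other model's; the bare `value: 0.5` is the fallback for tensors matched by neither filter.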
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "shastraai/Shastra-LLAMA2-Math-Commonsense"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
reemali811/nucleotide-transformer-finetuned-NucleotideTransformer | reemali811 | 2024-11-01T10:17:25Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"esm",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-01T10:15:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlfoundations-dev/OH_original_wo_evol_instruct_140k | mlfoundations-dev | 2024-11-01T10:13:37Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T05:52:23Z | ---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: OH_original_wo_evol_instruct_140k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OH_original_wo_evol_instruct_140k
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the mlfoundations-dev/OH_original_wo_evol_instruct_140k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6121
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 1738
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6185 | 0.9976 | 307 | 0.6178 |
| 0.5652 | 1.9984 | 615 | 0.6080 |
| 0.5197 | 2.9927 | 921 | 0.6121 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.3.0
- Datasets 2.21.0
- Tokenizers 0.20.1
|
sophiebui/en-ru_mtmodel_v1 | sophiebui | 2024-11-01T10:13:23Z | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:sophiebui/en-ru_mtmodel",
"base_model:finetune:sophiebui/en-ru_mtmodel",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-01T09:49:27Z | ---
library_name: transformers
license: mit
base_model: sophiebui/en-ru_mtmodel
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: en-ru_mtmodel_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en-ru_mtmodel_v1
This model is a fine-tuned version of [sophiebui/en-ru_mtmodel](https://huggingface.co/sophiebui/en-ru_mtmodel) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8443
- Bleu: 44.9157
- Gen Len: 32.0811
## Model description
More information needed
## Intended uses & limitations
More information needed
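Absent an official example, here is a minimal sketch using the translation pipeline. The tags indicate an M2M100-style architecture, for which `src_lang`/`tgt_lang` select the language tokens; treat both values as assumptions, since the card documents neither.

```python
from transformers import pipeline

# src_lang/tgt_lang are handled by the M2M100-style tokenizer; both are assumptions.
translator = pipeline(
    "translation", model="sophiebui/en-ru_mtmodel_v1", src_lang="en", tgt_lang="ru"
)
print(translator("The contract enters into force upon signature.")[0]["translation_text"])
```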
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 226 | 0.9394 | 37.9005 | 31.5405 |
| No log | 2.0 | 452 | 0.8537 | 43.6072 | 32.3514 |
| 0.935 | 3.0 | 678 | 0.8400 | 46.3652 | 31.8108 |
| 0.935 | 4.0 | 904 | 0.8482 | 44.6002 | 31.973 |
| 0.4432 | 5.0 | 1130 | 0.8443 | 44.9157 | 32.0811 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
coastalcph/CLIPDetail-8311682 | coastalcph | 2024-11-01T10:10:52Z | 148 | 0 | transformers | [
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | 2024-11-01T10:10:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tuanpasg/Puffin-Qwen2.5-CodeMath-1 | tuanpasg | 2024-11-01T09:53:53Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-1.5B",
"base_model:merge:Qwen/Qwen2.5-Coder-1.5B",
"base_model:Qwen/Qwen2.5-Math-1.5B",
"base_model:merge:Qwen/Qwen2.5-Math-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T09:52:35Z | ---
base_model:
- Qwen/Qwen2.5-Coder-1.5B
- Qwen/Qwen2.5-Math-1.5B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
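For reference, SLERP (spherical linear interpolation) blends each pair of weight tensors along the arc between them rather than along a straight line:

$$
\operatorname{slerp}(p_0, p_1; t) = \frac{\sin\bigl((1-t)\theta\bigr)}{\sin\theta}\,p_0 + \frac{\sin(t\theta)}{\sin\theta}\,p_1
$$

where $\theta$ is the angle between the flattened tensors $p_0$ and $p_1$, and $t \in [0, 1]$ sets how far the result moves from the base model.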
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-Coder-1.5B](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B)
* [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Qwen/Qwen2.5-Coder-1.5B
- model: Qwen/Qwen2.5-Math-1.5B
merge_method: slerp
base_model: Qwen/Qwen2.5-Coder-1.5B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
tuanpasg/Puffin-Qwen2.5-CodeMath | tuanpasg | 2024-11-01T09:39:44Z | 133 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-1.5B",
"base_model:merge:Qwen/Qwen2.5-Coder-1.5B",
"base_model:Qwen/Qwen2.5-Math-1.5B",
"base_model:merge:Qwen/Qwen2.5-Math-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T09:38:25Z | ---
base_model:
- Qwen/Qwen2.5-Coder-1.5B
- Qwen/Qwen2.5-Math-1.5B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-Coder-1.5B](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B)
* [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Qwen/Qwen2.5-Math-1.5B
- model: Qwen/Qwen2.5-Coder-1.5B
merge_method: slerp
base_model: Qwen/Qwen2.5-Math-1.5B
dtype: bfloat16
parameters:
t: 0.5
```
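To reproduce the merge from this configuration, a sketch using mergekit's Python entry point; `config.yaml` is a placeholder holding the YAML above, and the API names follow mergekit's documented example, so they may shift between versions.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# "config.yaml" is a placeholder for the YAML configuration shown above.
with open("config.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    out_path="./Puffin-Qwen2.5-CodeMath",  # output directory for the merged weights
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```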
|
nuxper/DrBERT-7GB-finetuned-loinc | nuxper | 2024-11-01T09:35:58Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"text-classification",
"generated_from_trainer",
"base_model:Dr-BERT/DrBERT-7GB",
"base_model:finetune:Dr-BERT/DrBERT-7GB",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-01T09:14:00Z | ---
library_name: transformers
license: apache-2.0
base_model: Dr-BERT/DrBERT-7GB
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: DrBERT-7GB-finetuned-loinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DrBERT-7GB-finetuned-loinc
This model is a fine-tuned version of [Dr-BERT/DrBERT-7GB](https://huggingface.co/Dr-BERT/DrBERT-7GB) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6762
- Accuracy: 0.8519
- F1: 0.8516
## Model description
More information needed
## Intended uses & limitations
More information needed
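Until the authors document the label set, a minimal inference sketch with the `text-classification` pipeline; the French lab-report phrase is an invented example, and the returned label names come from whatever the authors stored in the model config.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="nuxper/DrBERT-7GB-finetuned-loinc")

# Invented example: a French lab-test mention, in line with DrBERT's biomedical domain.
print(clf("Dosage de la créatinine dans le sang"))
```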
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 1268 | 0.6636 | 0.8410 | 0.8379 |
| No log | 2.0 | 2536 | 0.6715 | 0.8401 | 0.8414 |
| No log | 3.0 | 3804 | 0.6953 | 0.8538 | 0.8490 |
| No log | 4.0 | 5072 | 0.6719 | 0.8522 | 0.8524 |
| No log | 5.0 | 6340 | 0.6762 | 0.8519 | 0.8516 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.3.1+cxx11.abi
- Datasets 3.0.1
- Tokenizers 0.20.0
|
mradermacher/Emot5-large-GGUF | mradermacher | 2024-11-01T09:35:27Z | 31 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:lzw1008/Emot5-large",
"base_model:quantized:lzw1008/Emot5-large",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-11-01T09:31:08Z | ---
base_model: lzw1008/Emot5-large
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/lzw1008/Emot5-large
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
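To fetch a single quant programmatically (file names follow the table below), one option is `huggingface_hub`:

```python
from huggingface_hub import hf_hub_download

# Q4_K_M is the "fast, recommended" row in the table below.
path = hf_hub_download(
    repo_id="mradermacher/Emot5-large-GGUF",
    filename="Emot5-large.Q4_K_M.gguf",
)
print(path)
```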
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q4_K_M.gguf) | Q4_K_M | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q5_K_S.gguf) | Q5_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q5_K_M.gguf) | Q5_K_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q6_K.gguf) | Q6_K | 0.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.Q8_0.gguf) | Q8_0 | 0.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Emot5-large-GGUF/resolve/main/Emot5-large.f16.gguf) | f16 | 1.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
VTSNLP/trans_model_vi_en | VTSNLP | 2024-11-01T09:30:58Z | 5 | 1 | null | [
"tensorboard",
"safetensors",
"t5",
"generated_from_trainer",
"base_model:VietAI/envit5-translation",
"base_model:finetune:VietAI/envit5-translation",
"license:openrail",
"region:us"
] | null | 2024-11-01T09:30:13Z | ---
license: openrail
base_model: VietAI/envit5-translation
tags:
- generated_from_trainer
model-index:
- name: trans_model_vi_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trans_model_vi_en
This model is a fine-tuned version of [VietAI/envit5-translation](https://huggingface.co/VietAI/envit5-translation) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
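No usage example is given; here is a minimal sketch with the text2text pipeline. The base checkpoint (VietAI/envit5-translation) expects a language prefix such as `vi: `, and it is an assumption that this fine-tune keeps that convention.

```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="VTSNLP/trans_model_vi_en")

# The "vi: " prefix follows the base model's convention; whether the fine-tune
# preserves it is not documented.
print(translator("vi: Hợp đồng có hiệu lực kể từ ngày ký.")[0]["generated_text"])
```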
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 4
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
GeneZC/MiniMA-2-3B | GeneZC | 2024-11-01T09:22:35Z | 1,760 | 17 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"zh",
"dataset:EleutherAI/pile",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:p208p2002/wudao",
"arxiv:2311.07052",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-27T03:36:23Z | ---
language:
- en
- zh
license: apache-2.0
library_name: transformers
datasets:
- EleutherAI/pile
- togethercomputer/RedPajama-Data-1T
- p208p2002/wudao
widget:
- text: <s> 4 + 3 =
model-index:
- name: MiniMA-2-3B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 44.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 69.33
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 41.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 38.44
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 8.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B
name: Open LLM Leaderboard
---
## MiniMA-2-3B
📑 [arXiv](https://arxiv.org/abs/2311.07052) | 💻 [GitHub](https://github.com/GeneZC/MiniMA) | 🤗 [HuggingFace-MiniMA](https://huggingface.co/GeneZC/MiniMA-3B) | 🤗 [HuggingFace-MiniChat](https://huggingface.co/GeneZC/MiniChat-3B) | 🤖 [ModelScope-MiniMA](https://modelscope.cn/models/GeneZC/MiniMA-3B) | 🤖 [ModelScope-MiniChat](https://modelscope.cn/models/GeneZC/MiniChat-3B) | 🤗 [HuggingFace-MiniChat-1.5](https://huggingface.co/GeneZC/MiniChat-1.5-3B) | 🤗 [HuggingFace-MiniMA-2](https://huggingface.co/GeneZC/MiniMA-2-3B) | 🤗 [HuggingFace-MiniChat-2](https://huggingface.co/GeneZC/MiniChat-2-3B)
🆕 **Updates from MiniMA-3B**:
- continued from MiniMA-3B without distillation;
- better data mixture;
- more trained tokens.
❗ Must comply with the LICENSE of LLaMA-2 since it is derived from LLaMA-2.
A language model continued from MiniMA-3B.
It completes the compute-performance Pareto frontier together with MiniMA-3B and other existing models.
<img src="./teaser_a.jpg" alt="teaser_a" width="700" />
**Standard Benchmarks**
|Method|TFLOPs|MMLU (5-shot)|CEval (5-shot)|DROP (3-shot)|HumanEval (0-shot)|BBH (3-shot)|GSM8K (8-shot)|
|--|--|--|--|--|--|--|--|
|Mamba-2.8B|4.6E9|25.58|24.74|15.72|7.32|29.37|3.49|
|ShearedLLaMA-2.7B|0.8E9|26.97|22.88|19.98|4.88|30.48|3.56|
|BTLM-3B|11.3E9|27.20|26.00|17.84|10.98|30.87|4.55|
|StableLM-3B|72.0E9|44.75|31.05|22.35|15.85|32.59|10.99|
|Qwen-1.8B|23.8E9|44.05|54.75|12.97|14.02|30.80|22.97|
|Phi-2-2.8B|159.9E9|56.74|34.03|30.74|46.95|44.13|55.42|
|LLaMA-2-7B|84.0E9|46.00|34.40|31.57|12.80|32.02|14.10|
||
|MiniMA-3B|4.0E9|28.51|28.23|22.50|10.98|31.61|8.11|
|MiniChat-3B|4.0E9|38.40|36.48|22.58|18.29|31.36|29.72|
|MiniMA-2-3B|13.4E9|40.14|44.65|23.10|14.63|31.43|8.87|
|MiniChat-2-3B|13.4E9|46.17|43.91|30.26|22.56|34.95|38.13|
The following is an example code snippet to use MiniMA-2-3B:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# MiniMA
tokenizer = AutoTokenizer.from_pretrained("GeneZC/MiniMA-2-3B", use_fast=False)
# GPU.
model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniMA-2-3B", use_cache=True, device_map="auto", torch_dtype=torch.float16).eval()
# CPU.
# model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniMA-2-3B", use_cache=True, device_map="cpu", torch_dtype=torch.float16).eval()
prompt = "Question: Sherrie tells the truth. Vernell says Sherrie tells the truth. Alexis says Vernell lies. Michaela says Alexis tells the truth. Elanor says Michaela tells the truth. Does Elanor tell the truth?\nAnswer: No\n\nQuestion: Kristian lies. Sherrie says Kristian lies. Delbert says Sherrie lies. Jerry says Delbert tells the truth. Shalonda says Jerry tells the truth. Does Shalonda tell the truth?\nAnswer: No\n\nQuestion: Vina tells the truth. Helene says Vina lies. Kandi says Helene tells the truth. Jamey says Kandi lies. Ka says Jamey lies. Does Ka tell the truth?\nAnswer: No\n\nQuestion: Christie tells the truth. Ka says Christie tells the truth. Delbert says Ka lies. Leda says Delbert tells the truth. Lorine says Leda tells the truth. Does Lorine tell the truth?\nAnswer:"
input_ids = tokenizer([prompt]).input_ids
output_ids = model.generate(
torch.as_tensor(input_ids).cuda(),
do_sample=True,
temperature=0.7,
max_new_tokens=1024,
)
output_ids = output_ids[0][len(input_ids[0]):]
output = tokenizer.decode(output_ids, skip_special_tokens=True).strip()
# output: "No"
```
## Bibtex
```bibtex
@article{zhang2023law,
title={Towards the Law of Capacity Gap in Distilling Language Models},
author={Zhang, Chen and Song, Dawei and Ye, Zheyu and Gao, Yan},
year={2023},
url={https://arxiv.org/abs/2311.07052}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_GeneZC__MiniMA-2-3B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |44.75|
|AI2 Reasoning Challenge (25-Shot)|44.71|
|HellaSwag (10-Shot) |69.33|
|MMLU (5-Shot) |41.22|
|TruthfulQA (0-shot) |38.44|
|Winogrande (5-shot) |66.69|
|GSM8k (5-shot) | 8.11|
|
minhdang/gte-base-law-matryoshka | minhdang | 2024-11-01T09:20:11Z | 5 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:107510",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Alibaba-NLP/gte-multilingual-base",
"base_model:finetune:Alibaba-NLP/gte-multilingual-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-11-01T09:19:51Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:107510
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Alibaba-NLP/gte-multilingual-base
widget:
- source_sentence: '[''Chแป ฤแปnh thแบงu\n1. Chแป ฤแปnh thแบงu ฤฦฐแปฃc รกp dแปฅng trong cรกc trฦฐแปng
hแปฃp sau ฤรขy:\na) Gรณi thแบงu cแบฅp bรกch cแบงn triแปn khai nhแบฑm mแปฅc tiรชu bแบฃo vแป chแปง quyแปn,
an ninh quแปc gia; gรณi thแบงu cแบงn thแปฑc hiแปn ฤแป khแบฏc phแปฅc ngay hoแบทc ฤแป xแปญ lรฝ kแปp thแปi
hแบญu quแบฃ gรขy ra do thiรชn tai, hแปa hoแบกn, tai nแบกn bแบฅt ngแป, sแปฑ cแป, thแบฃm hแปa hoแบทc sแปฑ
kiแปn bแบฅt khแบฃ khรกng khรกc;\nb) Gรณi thแบงu cung cแบฅp dแปch vแปฅ tฦฐ vแบฅn, phi tฦฐ vแบฅn, hร ng
hรณa, xรขy lแบฏp cแบงn triแปn khai ngay ฤแป trรกnh gรขy nguy hแบกi ฤแบฟn tรญnh mแบกng vร tร i sแบฃn
cแปงa cแปng ฤแปng dรขn cฦฐ trรชn ฤแปa bร n hoแบทc ฤแป khรดng แบฃnh hฦฐแปng nghiรชm trแปng ฤแบฟn cรดng
trรฌnh liแปn kแป;\nc) Gรณi thแบงu cung cแบฅp dแปch vแปฅ tฦฐ vแบฅn, phi tฦฐ vแบฅn, thuแปc, hรณa chแบฅt,
vแบญt tฦฐ xรฉt nghiแปm, thiแบฟt bแป y tแบฟ, linh kiแปn, phแปฅ kiแปn, phฦฐฦกng tiแปn, xรขy lแบฏp cแบงn
triแปn khai ngay ฤแป phแปฅc vแปฅ cรดng tรกc phรฒng, chแปng dแปch bแปnh hoแบทc duy trรฌ hoแบกt ฤแปng
cแปงa cฦก sแป khรกm bแปnh, chแปฏa bแปnh trong trฦฐแปng hแปฃp cแบฅp bรกch, trรกnh gรขy nguy hแบกi ฤแบฟn
tรญnh mแบกng, sแปฉc khแปe ngฦฐแปi dรขn; gรณi thแบงu mua thuแปc, hรณa chแบฅt, vแบญt tฦฐ xรฉt nghiแปm,
thiแบฟt bแป y tแบฟ, linh kiแปn, phแปฅ kiแปn ฤแป cแบฅp cแปฉu ngฦฐแปi bแปnh trong tรฌnh trแบกng cแบฅp
cแปฉu theo quy ฤแปnh cแปงa Luแบญt Khรกm bแปnh, chแปฏa bแปnh trong trฦฐแปng hแปฃp cฦก sแป khรกm bแปnh,
chแปฏa bแปnh khรดng cรณ ฤแปง thuแปc, hรณa chแบฅt, vแบญt tฦฐ xรฉt nghiแปm, thiแบฟt bแป y tแบฟ, linh
kiแปn, phแปฅ kiแปn; gรณi thแบงu mua thuแปc, thiแบฟt bแป y tแบฟ chแป cรณ duy nhแบฅt mแปt hรฃng sแบฃn
xuแบฅt trรชn thแป trฦฐแปng;\nd) Gรณi thแบงu cแบงn thแปฑc hiแปn ฤแป bแบฃo vแป bรญ mแบญt nhร nฦฐแปc;\n...'']'
sentences:
- Trong trฦฐแปng hแปฃp nร o thรฌ ngรขn sรกch trung ฦฐฦกng ฤฦฐแปฃc gia hแบกn khoแบฃn vay ngรขn quแปน
nhร nฦฐแปc?
- Hร nh vi trรฌnh diแป
n khiรชu dรขm trong cแบฅu thร nh tแปi sแปญ dแปฅng ngฦฐแปi dฦฐแปi 16 tuแปi vร o
mแปฅc ฤรญch khiรชu dรขm lร gรฌ?
- Cho phรฉp chแป ฤแปnh thแบงu ฤแป mua thuแปc, thiแบฟt bแป y tแบฟ trong trฦฐแปng hแปฃp khแบฉn cแบฅp?
- source_sentence: "['\"1. Cuแปi mแปi hแปc kแปณ chรญnh, sinh viรชn ฤฦฐแปฃc cแบฃnh bรกo hแปc tแบญp\
\ dแปฑa trรชn mแปt sแป ฤiแปu kiแปn nhฦฐ sau:\\na) Tแปng sแป tรญn chแป khรดng ฤแบกt trong hแปc\
\ kแปณ vฦฐแปฃt quรก 50% khแปi lฦฐแปฃng ฤรฃ ฤฤng kรญ hแปc trong hแปc kแปณ, hoแบทc tแปng sแป tรญn chแป\
\ nแปฃ ฤแปng tแปซ ฤแบงu khรณa hแปc vฦฐแปฃt quรก 24;\\nb) ฤiแปm trung bรฌnh hแปc kแปณ ฤแบกt dฦฐแปi 0,8\
\ ฤแปi vแปi hแปc kแปณ ฤแบงu cแปงa khรณa hแปc, dฦฐแปi 1,0 ฤแปi vแปi cรกc hแปc kแปณ tiแบฟp theo;\\nc)\
\ ฤiแปm trung bรฌnh tรญch lลฉy ฤแบกt dฦฐแปi 1,2 ฤแปi vแปi sinh viรชn trรฌnh ฤแป nฤm thแปฉ nhแบฅt,\
\ dฦฐแปi 1,4 ฤแปi vแปi sinh viรชn trรฌnh ฤแป nฤm thแปฉ hai, dฦฐแปi 1,6 ฤแปi vแปi sinh viรชn\
\ trรฌnh ฤแป nฤm thแปฉ ba dฦฐแปi 1,8 ฤแปi vแปi sinh viรชn cรกc nฤm tiแบฟp theo.\\n2. Sinh\
\ viรชn bแป buแปc thรดi hแปc trong cรกc trฦฐแปng hแปฃp sau:\\na) Sแป lแบงn cแบฃnh bรกo hแปc tแบญp\
\ hoแบทc mแปฉc cแบฃnh bรกo hแปc tแบญp vฦฐแปฃt quรก giแปi hแบกn theo quy ฤแปnh cแปงa cฦก sแป ฤร o tแบกo;\\\
nb) Thแปi gian hแปc tแบญp vฦฐแปฃt quรก giแปi hแบกn theo quy ฤแปnh tแบกi khoแบฃn 5 ฤiแปu 2 cแปงa Quy\
\ chแบฟ nร y.\\n3. Quy chแบฟ cแปงa cฦก sแป ฤร o tแบกo quy ฤแปnh cแปฅ thแป:\\na) Viแปc lแปฑa chแปn\
\ รกp dแปฅng mแปt sแป ฤiแปu kiแปn cแบฃnh bรกo hแปc tแบญp, giแปi hแบกn sแป lแบงn hoแบทc mแปฉc cแบฃnh bรกo\
\ hแปc tแบญp nhฦฐng khรดng vฦฐแปฃt quรก 2 lแบงn cแบฃnh bรกo liรชn tiแบฟp;\\nb) Quy trรฌnh, thแปง tแปฅc\
\ cแบฃnh bรกo hแปc tแบญp, buแปc thรดi hแปc; viแปc thรดng bรกo hรฌnh thแปฉc รกp dแปฅng tแปi sinh viรชn;\\\
nc) Viแปc bแบฃo lฦฐu kแบฟt quแบฃ hแปc tแบญp ฤรฃ tรญch luแปน trong trฦฐแปng hแปฃp sinh viรชn bแป buแปc\
\ thรดi hแปc.\"'\n '\"1. Cuแปi mแปi nฤm hแปc, sinh viรชn ฤฦฐแปฃc ฤรกnh giรก ฤแบกt tiแบฟn ฤแป hแปc\
\ tแบญp bรฌnh thฦฐแปng vร ฤฦฐแปฃc hแปc tiแบฟp lรชn nฤm hแปc sau nแบฟu ฤแบกt cแบฃ hai ฤiแปu kiแปn sau:\\\
na) ฤiแปm trung bรฌnh nฤm hแปc ฤแบกt tแปซ 1,0 trแป lรชn ฤแปi vแปi nฤm hแปc thแปฉ nhแบฅt, tแปซ 1,2\
\ trแป lรชn ฤแปi vแปi nฤm thแปฉ hai vร tแปซ 1,4 ฤแปi vแปi nฤm thแปฉ ba trแป ฤi;\\nb) Sแป tรญn\
\ chแป nแปฃ ฤแปng tแปซ ฤแบงu khรณa khรดng vฦฐแปฃt quรก 16.\\n2. Sinh viรชn bแป buแปc thรดi hแปc trong\
\ cรกc trฦฐแปng hแปฃp sau:\\na) ฤiแปm trung bรฌnh nฤm hแปc ฤแบกt dฦฐแปi 0,8;\\nb) ฤiแปm trung\
\ bรฌnh tรญch lลฉy ฤแบกt dฦฐแปi 1,2 sau 2 nฤm hแปc, dฦฐแปi 1,4 sau 3 nฤm hแปc vร dฦฐแปi 1,6\
\ tแปซ sau 4 nฤm hแปc trแป ฤi;\\nc) Thแปi gian hแปc tแบญp vฦฐแปฃt quรก giแปi hแบกn theo quy ฤแปnh\
\ tแบกi khoแบฃn 5 ฤiแปu 2 cแปงa Quy chแบฟ nร y.\\n3. Sinh viรชn khรดng thuแปc diแปn quy ฤแปnh\
\ tแบกi khoแบฃn 1 vร khoแบฃn 2 ฤiแปu nร y ฤฦฐแปฃc xแบฟp lแปp hแปc cรนng khoรก sau ฤแป cแบฃi thiแปn\
\ kแบฟt quแบฃ hแปc tแบญp.\\n4. Quy chแบฟ cแปงa cฦก sแป ฤร o tแบกo quy ฤแปnh cแปฅ thแป:\\na) Viแปc lแปฑa\
\ chแปn รกp dแปฅng mแปt sแป ฤiแปu kiแปn cแบฃnh bรกo hแปc tแบญp tฦฐฦกng tแปฑ quy ฤแปnh ฤแปi vแปi ฤร o\
\ tแบกo theo tรญn chแป tแบกi khoแบฃn 1 ฤiแปu 11 cแปงa Quy chแบฟ nร y;\\nb) Quy trรฌnh, thแปง tแปฅc\
\ cแบฃnh bรกo hแปc tแบญp (nแบฟu cรณ), buแปc thรดi hแปc; viแปc thรดng bรกo hรฌnh thแปฉc รกp dแปฅng tแปi\
\ sinh viรชn;\\nc) Viแปc bแบฃo lฦฐu kแบฟt quแบฃ hแปc tแบญp ฤรฃ tรญch luแปน trong trฦฐแปng hแปฃp sinh\
\ viรชn bแป buแปc thรดi hแปc.\"']"
sentences:
- Ngฦฐแปi lao ฤแปng cรณ thแปi gian tham gia bแบฃo hiแปm xรฃ hแปi bแบฏt buแปc mร tแปฑ tแปญ cรณ ฤฦฐแปฃc
hฦฐแปng trแปฃ cแบฅp mai tรกng khรดng?
- Giแบฅy chแปฉng nhแบญn sแปญ dแปฅng cรดng cแปฅ hแป trแปฃ bแป mแบฅt thรฌ trรฌnh tแปฑ, thแปง tแปฅc ฤแป nghแป cแบฅp
lแบกi ฤฦฐแปฃc thแปฑc hiแปn nhฦฐ thแบฟ nร o?
- Xแปญ lรฝ kแบฟt quแบฃ hแปc tแบญp theo tรญn chแป vร niรชn chแบฟ ฤฦฐแปฃc quy ฤแปnh nhฦฐ thแบฟ nร o?
- source_sentence: '[''Chuyแปn ngร nh, chuyแปn nฦกi hแปc, chuyแปn cฦก sแป ฤร o tแบกo, chuyแปn
hรฌnh thแปฉc hแปc\n1. Sinh viรชn ฤฦฐแปฃc xem xรฉt chuyแปn sang hแปc mแปt chฦฐฦกng trรฌnh, mแปt
ngร nh ฤร o tแบกo khรกc, hoแบทc mแปt phรขn hiแปu khรกc cแปงa cฦก sแป ฤร o tแบกo, hoแบทc tแปซ phรขn hiแปu
vแป trแปฅ sแป chรญnh khi cรณ ฤแปง cรกc ฤiแปu kiแปn sau:\na) Khรดng ฤang lร sinh viรชn trรฌnh
ฤแป nฤm thแปฉ nhแบฅt hoแบทc nฤm cuแปi khรณa, khรดng thuแปc diแปn bแป xem xรฉt buแปc thรดi hแปc
vร cรฒn ฤแปง thแปi gian hแปc tแบญp theo quy ฤแปnh tแบกi khoแบฃn 5 ฤiแปu 2 cแปงa Quy chแบฟ nร y;\nb)
Sinh viรชn ฤแบกt ฤiแปu kiแปn trรบng tuyแปn cแปงa chฦฐฦกng trรฌnh, ngร nh ฤร o tแบกo, cแปงa trแปฅ sแป
chรญnh (hoแบทc phรขn hiแปu ) trong cรนng khรณa tuyแปn sinh;\nc) Cฦก sแป ฤร o tแบกo, trแปฅ sแป
chรญnh (hoแบทc phรขn hiแปu) cรณ ฤแปง cรกc ฤiแปu kiแปn bแบฃo ฤแบฃm chแบฅt lฦฐแปฃng, chฦฐa vฦฐแปฃt quรก nฤng
lแปฑc ฤร o tแบกo ฤแปi vแปi chฦฐฦกng trรฌnh, ngร nh ฤร o tแบกo ฤรณ theo quy ฤแปnh hiแปn hร nh cแปงa
Bแป Giรกo dแปฅc vร ฤร o tแบกo;\nd) ฤฦฐแปฃc sแปฑ ฤแปng รฝ cแปงa thแปง trฦฐแปng cรกc ฤฦกn vแป chuyรชn mรดn
phแปฅ trรกch chฦฐฦกng trรฌnh, ngร nh ฤร o tแบกo, ngฦฐแปi phแปฅ trรกch phรขn hiแปu (nฦกi chuyแปn ฤi
vร chuyแบฟn ฤแบฟn) vร cแปงa hiแปu trฦฐแปng cฦก sแป ฤร o tแบกo.\n2. Sinh viรชn ฤฦฐแปฃc xem xรฉt chuyแปn
cฦก sแป ฤร o tแบกo khi cรณ ฤแปง cรกc ฤiแปu kiแปn sau:\na) Khรดng ฤang lร sinh viรชn trรฌnh ฤแป
nฤm thแปฉ nhแบฅt hoแบทc nฤm cuแปi khรณa, khรดng thuแปc diแปn bแป xem xรฉt buแปc thรดi hแปc vร
cรฒn ฤแปง thแปi gian hแปc tแบญp theo quy ฤแปnh tแบกi khoแบฃn 5 ฤiแปu 2 cแปงa Quy chแบฟ nร y;\nb)
Sinh viรชn ฤแบกt ฤiแปu kiแปn trรบng tuyแปn cแปงa chฦฐฦกng trรฌnh, ngร nh ฤร o tแบกo cรนng khรณa
tuyแปn sinh tแบกi nฦกi chuyแปn ฤแบฟn;\nc) Nฦกi chuyแปn ฤแบฟn cรณ ฤแปง cรกc ฤiแปu kiแปn bแบฃo ฤแบฃm
chแบฅt lฦฐแปฃng, chฦฐa vฦฐแปฃt quรก nฤng lแปฑc ฤร o tแบกo ฤแปi vแปi chฦฐฦกng trรฌnh, ngร nh ฤร o tแบกo
ฤรณ theo quy ฤแปnh hiแปn hร nh cแปงa Bแป Giรกo dแปฅc vร ฤร o tแบกo;\nd) ฤฦฐแปฃc sแปฑ ฤแปng รฝ cแปงa
hiแปu trฦฐแปng cฦก sแป ฤร o tแบกo xin chuyแปn ฤi vร cฦก sแป ฤร o tแบกo xin chuyแปn ฤแบฟn.\n3. Sinh
viรชn ฤฦฐแปฃc xem xรฉt chuyแปn tแปซ ฤร o tแบกo theo hรฌnh thแปฉc chรญnh quy sang hรฌnh thแปฉc vแปซa
lร m vแปซa hแปc hoแบทc ฤร o tแบกo tแปซ xa cแปงa cฦก sแป ฤร o tแบกo nแบฟu cรฒn ฤแปง thแปi gian hแปc tแบญp
theo quy ฤแปnh ฤแปi vแปi hรฌnh thแปฉc chuyแปn ฤแบฟn.\n4. Quy chแบฟ cแปงa cฦก sแป ฤร o tแบกo quy
ฤแปnh chi tiแบฟt thแบฉm quyแปn, ฤiแปu kiแปn, thแปง tแปฅc chuyแปn chฦฐฦกng trรฌnh, ngร nh ฤร o tแบกo,
chuyแปn nฦกi hแปc, chuyแปn cฦก sแป ฤร o tแบกo hoแบทc chuyแปn hรฌnh thแปฉc hแปc; viแปc cรดng nhแบญn
kแบฟt quแบฃ hแปc tแบญp hoแบทc chuyแปn ฤแปi tรญn chแป ฤรฃ tรญch lลฉy ฤแปi cho sinh viรชn thuแปc cรกc
trฦฐแปng hแปฃp nร y.'']'
sentences:
- ฤiแปu kiแปn ฤแป ฤฦฐแปฃc chuyแปn ngร nh, chuyแปn nฦกi hแปc, chuyแปn cฦก sแป ฤร o tแบกo, chuyแปn hรฌnh
thแปฉc hแปc ฤแปi vแปi sinh viรชn?
- Chi hแป trแปฃ hแปc nghแป cho ngฦฐแปi sau cai nghiแปn ma tรบy ฤฦฐแปฃc thแปฑc hiแปn nhฦฐ thแบฟ nร o?
- Nhiแปm vแปฅ cแปงa Hiแปp hแปi Nhiรชn liแปu sinh hแปc Viแปt Nam lร gรฌ?
- source_sentence: "['\"4. Thแปง tแปฅc chแปฉng thแปฑc chแปฏ kรฝ quy ฤแปnh tแบกi Khoแบฃn 1, 2 vร 3\
\ ฤiแปu nร y cลฉng ฤฦฐแปฃc รกp dแปฅng ฤแปi vแปi cรกc trฦฐแปng hแปฃp sau ฤรขy:\\na) Chแปฉng thแปฑc chแปฏ\
\ kรฝ cแปงa nhiแปu ngฦฐแปi trong cรนng mแปt giแบฅy tแป, vฤn bแบฃn;\\nb) Chแปฉng thแปฑc chแปฏ kรฝ cแปงa\
\ ngฦฐแปi khai lรฝ lแปch cรก nhรขn;\\nc) Chแปฉng thแปฑc chแปฏ kรฝ trong giแบฅy tแป, vฤn bแบฃn do\
\ cรก nhรขn tแปฑ lแบญp theo quy ฤแปnh cแปงa phรกp luแบญt;\\nd) Chแปฉng thแปฑc chแปฏ kรฝ trong Giแบฅy\
\ แปงy quyแปn ฤแปi vแปi trฦฐแปng hแปฃp แปงy quyแปn khรดng cรณ thรน lao, khรดng cรณ nghฤฉa vแปฅ bแปi\
\ thฦฐแปng cแปงa bรชn ฤฦฐแปฃc แปงy quyแปn vร khรดng liรชn quan ฤแบฟn viแปc chuyแปn quyแปn sแป hแปฏu\
\ tร i sแบฃn, quyแปn sแปญ dแปฅng bแบฅt ฤแปng sแบฃn.\"'\n '\"ฤiแปu 24. Thแปง tแปฅc chแปฉng thแปฑc chแปฏ\
\ kรฝ\\n2. Ngฦฐแปi thแปฑc hiแปn chแปฉng thแปฑc kiแปm tra giแบฅy tแป yรชu cแบงu chแปฉng thแปฑc, nแบฟu\
\ thแบฅy ฤแปง giแบฅy tแป theo quy ฤแปnh tแบกi Khoแบฃn 1 ฤiแปu nร y, tแบกi thแปi ฤiแปm chแปฉng thแปฑc,\
\ ngฦฐแปi yรชu cแบงu chแปฉng thแปฑc minh mแบซn, nhแบญn thแปฉc vร lร m chแปง ฤฦฐแปฃc hร nh vi cแปงa mรฌnh\
\ vร viแปc chแปฉng thแปฑc khรดng thuแปc cรกc trฦฐแปng hแปฃp quy ฤแปnh tแบกi ฤiแปu 25 cแปงa Nghแป\
\ ฤแปnh nร y thรฌ yรชu cแบงu ngฦฐแปi yรชu cแบงu chแปฉng thแปฑc kรฝ trฦฐแปc mแบทt vร thแปฑc hiแปn chแปฉng\
\ thแปฑc nhฦฐ sau:\\na) Ghi ฤแบงy ฤแปง lแปi chแปฉng chแปฉng thแปฑc chแปฏ kรฝ theo mแบซu quy ฤแปnh;\\\
nb) Kรฝ, ghi rรต hแป tรชn, ฤรณng dแบฅu cแปงa cฦก quan, tแป chแปฉc thแปฑc hiแปn chแปฉng thแปฑc vร ghi\
\ vร o sแป chแปฉng thแปฑc.\\nฤแปi vแปi giแบฅy tแป, vฤn bแบฃn cรณ tแปซ (02) hai trang trแป lรชn thรฌ\
\ ghi lแปi chแปฉng vร o trang cuแปi, nแบฟu giแบฅy tแป, vฤn bแบฃn cรณ tแปซ 02 (hai) tแป trแป lรชn\
\ thรฌ phแบฃi ฤรณng dแบฅu giรกp lai.\"']"
sentences:
- Bรญ thฦฐ Thฦฐแปng trแปฑc Trung ฦฐฦกng ฤoร n Thanh niรชn Cแปng sแบฃn Hแป Chรญ Minh ฤฦฐแปฃc nhแบญn mแปฉc
phแปฅ cแบฅp phแปฅc vแปฅ bao nhiรชu?
- ฤแปnh giรก lแบกi tร i sแบฃn lแบงn thแปฉ hai trong vแปฅ รกn hรฌnh sแปฑ ฤฦฐแปฃc thแปฑc hiแปn khi nร o?
- Chแปฉng thแปฑc chแปฏ kรฝ cho giแบฅy uแปท quyแปn sแบฝ ฤฦฐแปฃc thแปฑc hiแปn nhฦฐ thแบฟ nร o?
- source_sentence: '[''Mแปฉc giแบฃm thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน\n1. Phแบกm nhรขn bแป phแบกt
tรน chung thรขn, lแบงn ฤแบงu ฤฦฐแปฃc giแบฃm xuแปng ba mฦฐฦกi nฤm.\n2. Phแบกm nhรขn bแป phแบกt tรน tแปซ
ba mฦฐฦกi nฤm trแป xuแปng, mแปi lแบงn cรณ thแป ฤฦฐแปฃc giแบฃm tแปซ mแปt thรกng ฤแบฟn ba nฤm. Trฦฐแปng
hแปฃp ฤฦฐแปฃc giแบฃm ba nฤm phแบฃi lร nhแปฏng phแบกm nhรขn chแบฅp hร nh nghiรชm chแปnh Nแปi quy trแบกi
giam, trแบกi tแบกm giam, nhร tแบกm giแปฏ vร lแบญp cรดng hoแบทc cรณ thร nh tรญch ฤแบทc biแปt xuแบฅt
sแบฏc trong lao ฤแปng, hแปc tแบญp cแบฃi tแบกo.\n3. Mแปi nฤm mแปt phแบกm nhรขn chแป ฤฦฐแปฃc xรฉt giแบฃm
thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน mแปt lแบงn, khoแบฃng cรกch giแปฏa hai lแบงn xรฉt giแบฃm รญt nhแบฅt
lร mแปt nฤm. Trฦฐแปng hแปฃp ฤรฃ ฤฦฐแปฃc giแบฃm mร thแปi hแบกn tรน cรฒn lแบกi khรดng ฤแปง mแปt nฤm thรฌ
nฤm tiแบฟp theo cรณ thแป ฤแป nghแป xรฉt giแบฃm sแปm hฦกn trฦฐแปc mแปt ฤแปฃt, nhฦฐng vแบซn phแบฃi bแบฃo
ฤแบฃm mแปi nฤm chแป ฤฦฐแปฃc xรฉt giแบฃm mแปt lแบงn.\nTrฦฐแปng hแปฃp sau khi ฤรฃ ฤฦฐแปฃc giแบฃm thแปi hแบกn
mร cรณ lรฝ do ฤแบทc biแปt ฤรกng ฤฦฐแปฃc khoan hแปng nhฦฐ lแบญp cรดng hoแบทc mแบฏc bแปnh hiแปm nghรจo
thรฌ cรณ thแป ฤฦฐแปฃc xรฉt giแบฃm thรชm nhฦฐng khรดng ฤฦฐแปฃc quรก hai lแบงn trong mแปt nฤm.\n4.
Mแปi phแบกm nhรขn cรณ thแป ฤฦฐแปฃc giแบฃm thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน nhiแปu lแบงn, nhฦฐng
phแบฃi bแบฃo ฤแบฃm thแปi hแบกn thแปฑc tแบฟ chแบฅp hร nh รกn phแบกt tรน ฤฦฐแปฃc mแปt phแบงn hai mแปฉc hรฌnh
phแบกt tรน cรณ thแปi hแบกn ฤรฃ tuyรชn hoแบทc hai mฦฐฦกi nฤm ฤแปi vแปi hรฌnh phแบกt tรน chung thรขn.'']'
sentences:
- Mแปi nฤm thรฌ phแบกm nhรขn ฤฦฐแปฃc xรฉt giแบฃm thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน bao nhiรชu lแบงn?
- Giรกm ฤแปc Quแปน bแบฃo tแปn di sแบฃn Huแบฟ do ai bแป nhiแปm?
- Chแบฅp hร nh viรชn cรณ bแบฏt buแปc kรฝ tรชn vร o vฤn bแบฃn thแปa thuแบญn thi hร nh รกn dรขn sแปฑ cแปงa
ฤฦฐฦกng sแปฑ hay khรดng?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-multilingual-base
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.2955801104972376
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.48920140632847814
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5747530554160388
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6760421898543445
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2955801104972376
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.16306713544282603
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11495061108320775
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06760421898543445
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2955801104972376
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.48920140632847814
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5747530554160388
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6760421898543445
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.477230404285928
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.41460005872989236
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.42407099092866546
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.29449188012723926
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4896199564707852
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5724928846475807
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6713544282605056
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.29449188012723926
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1632066521569284
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11449857692951614
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06713544282605056
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.29449188012723926
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4896199564707852
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5724928846475807
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6713544282605056
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4743515215291094
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.41222767666137783
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4218120045923118
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.28511635693956133
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4783191026284949
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5605223505775992
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6628997153859032
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.28511635693956133
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.15943970087616496
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11210447011551983
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06628997153859031
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.28511635693956133
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4783191026284949
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5605223505775992
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6628997153859032
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4650207581954583
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.40272748532417074
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4121698601916915
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.2735643730118868
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4610748367654445
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.543529214799933
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6400468776159384
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2735643730118868
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.15369161225514816
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1087058429599866
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06400468776159383
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2735643730118868
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4610748367654445
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.543529214799933
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6400468776159384
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4483492533628726
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.387943762805642
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3975600153943611
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.2466097438473129
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.42005692281935375
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.49891176963000167
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5950108823037
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2466097438473129
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1400189742731179
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.09978235392600034
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.059501088230369995
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2466097438473129
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.42005692281935375
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.49891176963000167
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5950108823037
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4117058390410184
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.35411208905684183
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.36371800437559065
name: Cosine Map@100
---
# SentenceTransformer based on Alibaba-NLP/gte-multilingual-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) <!-- at revision 7fc06782350c1a83f88b15dd4b38ef853d3b8503 -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("minhdang/gte-base-law-matryoshka")
# Run inference
sentences = [
"['Mแปฉc giแบฃm thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน\\n1. Phแบกm nhรขn bแป phแบกt tรน chung thรขn, lแบงn ฤแบงu ฤฦฐแปฃc giแบฃm xuแปng ba mฦฐฦกi nฤm.\\n2. Phแบกm nhรขn bแป phแบกt tรน tแปซ ba mฦฐฦกi nฤm trแป xuแปng, mแปi lแบงn cรณ thแป ฤฦฐแปฃc giแบฃm tแปซ mแปt thรกng ฤแบฟn ba nฤm. Trฦฐแปng hแปฃp ฤฦฐแปฃc giแบฃm ba nฤm phแบฃi lร nhแปฏng phแบกm nhรขn chแบฅp hร nh nghiรชm chแปnh Nแปi quy trแบกi giam, trแบกi tแบกm giam, nhร tแบกm giแปฏ vร lแบญp cรดng hoแบทc cรณ thร nh tรญch ฤแบทc biแปt xuแบฅt sแบฏc trong lao ฤแปng, hแปc tแบญp cแบฃi tแบกo.\\n3. Mแปi nฤm mแปt phแบกm nhรขn chแป ฤฦฐแปฃc xรฉt giแบฃm thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน mแปt lแบงn, khoแบฃng cรกch giแปฏa hai lแบงn xรฉt giแบฃm รญt nhแบฅt lร mแปt nฤm. Trฦฐแปng hแปฃp ฤรฃ ฤฦฐแปฃc giแบฃm mร thแปi hแบกn tรน cรฒn lแบกi khรดng ฤแปง mแปt nฤm thรฌ nฤm tiแบฟp theo cรณ thแป ฤแป nghแป xรฉt giแบฃm sแปm hฦกn trฦฐแปc mแปt ฤแปฃt, nhฦฐng vแบซn phแบฃi bแบฃo ฤแบฃm mแปi nฤm chแป ฤฦฐแปฃc xรฉt giแบฃm mแปt lแบงn.\\nTrฦฐแปng hแปฃp sau khi ฤรฃ ฤฦฐแปฃc giแบฃm thแปi hแบกn mร cรณ lรฝ do ฤแบทc biแปt ฤรกng ฤฦฐแปฃc khoan hแปng nhฦฐ lแบญp cรดng hoแบทc mแบฏc bแปnh hiแปm nghรจo thรฌ cรณ thแป ฤฦฐแปฃc xรฉt giแบฃm thรชm nhฦฐng khรดng ฤฦฐแปฃc quรก hai lแบงn trong mแปt nฤm.\\n4. Mแปi phแบกm nhรขn cรณ thแป ฤฦฐแปฃc giแบฃm thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน nhiแปu lแบงn, nhฦฐng phแบฃi bแบฃo ฤแบฃm thแปi hแบกn thแปฑc tแบฟ chแบฅp hร nh รกn phแบกt tรน ฤฦฐแปฃc mแปt phแบงn hai mแปฉc hรฌnh phแบกt tรน cรณ thแปi hแบกn ฤรฃ tuyรชn hoแบทc hai mฦฐฦกi nฤm ฤแปi vแปi hรฌnh phแบกt tรน chung thรขn.']",
'Mแปi nฤm thรฌ phแบกm nhรขn ฤฦฐแปฃc xรฉt giแบฃm thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน bao nhiรชu lแบงn?',
'Chแบฅp hร nh viรชn cรณ bแบฏt buแปc kรฝ tรชn vร o vฤn bแบฃn thแปa thuแบญn thi hร nh รกn dรขn sแปฑ cแปงa ฤฦฐฦกng sแปฑ hay khรดng?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
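Because the model was trained with a Matryoshka objective at output dimensions 768, 512, 256, 128 and 64 (see the evaluation tables below), embeddings can be truncated to a smaller dimension to trade a little accuracy for faster search. A minimal sketch, assuming a `sentence-transformers` version that supports the `truncate_dim` argument (v2.7 or later):
```python
from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings
model = SentenceTransformer("minhdang/gte-base-law-matryoshka", truncate_dim=256)

embeddings = model.encode([
    "Mแปi nฤm thรฌ phแบกm nhรขn ฤฦฐแปฃc xรฉt giแบฃm thแปi hแบกn chแบฅp hร nh รกn phแบกt tรน bao nhiรชu lแบงn?",
])
print(embeddings.shape)
# (1, 256)
```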
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2956 |
| cosine_accuracy@3 | 0.4892 |
| cosine_accuracy@5 | 0.5748 |
| cosine_accuracy@10 | 0.676 |
| cosine_precision@1 | 0.2956 |
| cosine_precision@3 | 0.1631 |
| cosine_precision@5 | 0.115 |
| cosine_precision@10 | 0.0676 |
| cosine_recall@1 | 0.2956 |
| cosine_recall@3 | 0.4892 |
| cosine_recall@5 | 0.5748 |
| cosine_recall@10 | 0.676 |
| cosine_ndcg@10 | 0.4772 |
| cosine_mrr@10 | 0.4146 |
| **cosine_map@100** | **0.4241** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2945 |
| cosine_accuracy@3 | 0.4896 |
| cosine_accuracy@5 | 0.5725 |
| cosine_accuracy@10 | 0.6714 |
| cosine_precision@1 | 0.2945 |
| cosine_precision@3 | 0.1632 |
| cosine_precision@5 | 0.1145 |
| cosine_precision@10 | 0.0671 |
| cosine_recall@1 | 0.2945 |
| cosine_recall@3 | 0.4896 |
| cosine_recall@5 | 0.5725 |
| cosine_recall@10 | 0.6714 |
| cosine_ndcg@10 | 0.4744 |
| cosine_mrr@10 | 0.4122 |
| **cosine_map@100** | **0.4218** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2851 |
| cosine_accuracy@3 | 0.4783 |
| cosine_accuracy@5 | 0.5605 |
| cosine_accuracy@10 | 0.6629 |
| cosine_precision@1 | 0.2851 |
| cosine_precision@3 | 0.1594 |
| cosine_precision@5 | 0.1121 |
| cosine_precision@10 | 0.0663 |
| cosine_recall@1 | 0.2851 |
| cosine_recall@3 | 0.4783 |
| cosine_recall@5 | 0.5605 |
| cosine_recall@10 | 0.6629 |
| cosine_ndcg@10 | 0.465 |
| cosine_mrr@10 | 0.4027 |
| **cosine_map@100** | **0.4122** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2736 |
| cosine_accuracy@3 | 0.4611 |
| cosine_accuracy@5 | 0.5435 |
| cosine_accuracy@10 | 0.64 |
| cosine_precision@1 | 0.2736 |
| cosine_precision@3 | 0.1537 |
| cosine_precision@5 | 0.1087 |
| cosine_precision@10 | 0.064 |
| cosine_recall@1 | 0.2736 |
| cosine_recall@3 | 0.4611 |
| cosine_recall@5 | 0.5435 |
| cosine_recall@10 | 0.64 |
| cosine_ndcg@10 | 0.4483 |
| cosine_mrr@10 | 0.3879 |
| **cosine_map@100** | **0.3976** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2466 |
| cosine_accuracy@3 | 0.4201 |
| cosine_accuracy@5 | 0.4989 |
| cosine_accuracy@10 | 0.595 |
| cosine_precision@1 | 0.2466 |
| cosine_precision@3 | 0.14 |
| cosine_precision@5 | 0.0998 |
| cosine_precision@10 | 0.0595 |
| cosine_recall@1 | 0.2466 |
| cosine_recall@3 | 0.4201 |
| cosine_recall@5 | 0.4989 |
| cosine_recall@10 | 0.595 |
| cosine_ndcg@10 | 0.4117 |
| cosine_mrr@10 | 0.3541 |
| **cosine_map@100** | **0.3637** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 107,510 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:--------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 25 tokens</li><li>mean: 282.01 tokens</li><li>max: 1024 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 23.95 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| positive | anchor |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>['ฤแปi tฦฐแปฃng liรชn kแบฟt giรกo dแปฅc\nCฦก sแป giรกo dแปฅc mแบงm non tฦฐ thแปฅc, cฦก sแป giรกo dแปฅc phแป thรดng tฦฐ thแปฅc cแปงa Viแปt Nam vร cฦก sแป giรกo dแปฅc hoแบกt ฤแปng hแปฃp phรกp แป nฦฐแปc ngoร i, ฤฦฐแปฃc cฦก quan, tแป chแปฉc kiแปm ฤแปnh chแบฅt lฦฐแปฃng giรกo dแปฅc hoแบทc cฦก quan cรณ thแบฉm quyแปn cแปงa nฦฐแปc ngoร i cรดng nhแบญn vแป chแบฅt lฦฐแปฃng giรกo dแปฅc.']</code> | <code>Cฦก sแป giรกo dแปฅc phแป thรดng tฦฐ thแปฅc cแปงa Viแปt Nam cรณ phแบฃi lร ฤแปi tฦฐแปฃng liรชn kแบฟt giรกo dแปฅc vแปi nฦฐแปc ngoร i khรดng?</code> |
| <code>['Quyแบฟt ฤแปnh chแปง trฦฐฦกng ฤแบงu tฦฐ dแปฑ รกn PPP\n1. Nแปi dung quyแบฟt ฤแปnh chแปง trฦฐฦกng ฤแบงu tฦฐ dแปฑ รกn PPP thแปฑc hiแปn theo quy ฤแปnh tแบกi ฤiแปu 17 cแปงa Luแบญt PPP vร Mแบซu sแป 03 Phแปฅ lแปฅc II kรจm theo Nghแป ฤแปnh nร y.'<br> 'Nแปi dung quyแบฟt ฤแปnh chแปง trฦฐฦกng ฤแบงu tฦฐ dแปฑ รกn PPP\n1. Quyแบฟt ฤแปnh chแปง trฦฐฦกng ฤแบงu tฦฐ bao gแปm cรกc nแปi dung chแปง yแบฟu sau ฤรขy:\na) Tรชn dแปฑ รกn;\nb) Tรชn cฦก quan cรณ thแบฉm quyแปn;\nc) Mแปฅc tiรชu; dแปฑ kiแบฟn quy mรด, ฤแปa ฤiแปm, thแปi gian thแปฑc hiแปn dแปฑ รกn, nhu cแบงu sแปญ dแปฅng ฤแบฅt vร tร i nguyรชn khรกc;\nd) Dแปฑ kiแบฟn loแบกi hแปฃp ฤแปng dแปฑ รกn PPP;\nฤ) Sฦก bแป tแปng mแปฉc ฤแบงu tฦฐ; sฦก bแป phฦฐฦกng รกn tร i chรญnh: cฦก cแบฅu nguแปn vแปn trong dแปฑ รกn, dแปฑ kiแบฟn khung giรก, phรญ sแบฃn phแบฉm, dแปch vแปฅ cรดng ฤแปi vแปi dแปฑ รกn รกp dแปฅng cฦก chแบฟ thu phรญ trแปฑc tiแบฟp tแปซ ngฦฐแปi sแปญ dแปฅng;\ne) Cฦก chแบฟ bแบฃo ฤแบฃm ฤแบงu tฦฐ, cฦก chแบฟ chia sแบป phแบงn giแบฃm doanh thu.\n2. ฤแปi vแปi dแปฑ รกn แปฉng dแปฅng cรดng nghแป cao, แปฉng dแปฅng cรดng nghแป mแปi ngoร i quy ฤแปnh tแบกi khoแบฃn 1 ฤiแปu nร y, nแปi dung quyแบฟt ฤแปnh chแปง trฦฐฦกng ฤแบงu tฦฐ cรฒn bao gแปm tรชn bรชn mแปi thแบงu, hรฌnh thแปฉc lแปฑa chแปn nhร ฤแบงu tฦฐ, thแปi gian tแป chแปฉc lแปฑa chแปn nhร ฤแบงu tฦฐ.']</code> | <code>Quyแบฟt ฤแปnh chแปง trฦฐฦกng ฤแบงu tฦฐ dแปฑ รกn PPP cรณ nhแปฏng nแปi dung gรฌ?</code> |
| <code>['Hแปa sฤฉ hแบกng III - Mรฃ sแป: V.10.08.27\n...\n4. Yรชu cแบงu ฤแปi vแปi viรชn chแปฉc dแปฑ thi hoแบทc xรฉt thฤng hแบกng chแปฉc danh nghแป nghiแปp hแปa sฤฉ hแบกng III:\nCรณ thแปi gian giแปฏ chแปฉc danh nghแป nghiแปp hแปa sฤฉ hแบกng IV hoแบทc tฦฐฦกng ฤฦฐฦกng tแปซ ฤแปง 02 nฤm trแป lรชn (khรดng kแป thแปi gian tแบญp sแปฑ, thแปญ viแปc) ฤแปi vแปi trรฌnh ฤแป cao ฤแบณng hoแบทc tแปซ ฤแปง 03 nฤm trแป lรชn (khรดng kแป thแปi gian tแบญp sแปฑ, thแปญ viแปc) ฤแปi vแปi trรฌnh ฤแป trung cแบฅp. Trฦฐแปng hแปฃp cรณ thแปi gian tฦฐฦกng ฤฦฐฦกng thรฌ phแบฃi cรณ รญt nhแบฅt 01 nฤm (ฤแปง 12 thรกng) ฤang giแปฏ chแปฉc danh hแปa sฤฉ hแบกng IV tรญnh ฤแบฟn ngร y hแบฟt thแปi hแบกn nแปp hแป sฦก ฤฤng kรฝ dแปฑ thi hoแบทc xรฉt thฤng hแบกng.']</code> | <code>Viรชn chแปฉc xรฉt thฤng hแบกng chแปฉc danh nghแป nghiแปp hแปa sฤฉ hแบกng 3 cแบงn cรณ thแปi gian giแปฏ chแปฉc danh nghแป nghiแปp hแปa sฤฉ hแบกng 4 trong bao lรขu?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Evaluation Dataset
#### json
* Dataset: json
* Size: 11,946 evaluation samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:--------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 25 tokens</li><li>mean: 291.08 tokens</li><li>max: 1024 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 24.16 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| positive | anchor |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------|
| <code>['โฤiแปu 9. Sแปญ dแปฅng ฤแบฅt trแปng lรบa vร o mแปฅc ฤรญch khรกc khรดng ฤฦฐแปฃc cฦก quan nhร nฦฐแปc cรณ thแบฉm quyแปn cho phรฉp theo quy ฤแปnh tแบกi cรกc ฤiแปm a vร d khoแบฃn 1 ฤiแปu 57 cแปงa Luแบญt ฤแบฅt ฤai\n1. Chuyแปn ฤแบฅt trแปng lรบa sang ฤแบฅt trแปng cรขy lรขu nฤm, ฤแบฅt trแปng rแปซng (trแปซ trฦฐแปng hแปฃp quy ฤแปnh tแบกi khoแบฃn 7 ฤiแปu 14 cแปงa Nghแป ฤแปnh sแป 43/2014/Nฤ-CP ฤฦฐแปฃc sแปญa ฤแปi, bแป sung tแบกi khoแบฃn 11 ฤiแปu 2 cแปงa Nghแป ฤแปnh sแป 01/2017/Nฤ-CP) thรฌ hรฌnh thแปฉc vร mแปฉc xแปญ phแบกt nhฦฐ sau:\na) Phแบกt tiแปn tแปซ 2.000.000 ฤแปng ฤแบฟn 5.000.000 ฤแปng nแบฟu diแปn tรญch ฤแบฅt chuyแปn mแปฅc ฤรญch trรกi phรฉp dฦฐแปi 0,5 hรฉc ta;\nb) Phแบกt tiแปn tแปซ 5.000.000 ฤแปng ฤแบฟn 10.000.000 ฤแปng nแบฟu diแปn tรญch ฤแบฅt chuyแปn mแปฅc ฤรญch trรกi phรฉp tแปซ 0,5 hรฉc ta ฤแบฟn dฦฐแปi 01 hรฉc ta;\nc) Phแบกt tiแปn tแปซ 10.000.000 ฤแปng ฤแบฟn 20.000.000 ฤแปng nแบฟu diแปn tรญch ฤแบฅt chuyแปn mแปฅc ฤรญch trรกi phรฉp tแปซ 01 hรฉc ta ฤแบฟn dฦฐแปi 03 hรฉc ta;\nd) Phแบกt tiแปn tแปซ 20.000.000 ฤแปng ฤแบฟn 50.000.000 ฤแปng nแบฟu diแปn tรญch ฤแบฅt chuyแปn mแปฅc ฤรญch trรกi phรฉp tแปซ 03 hรฉc ta trแป lรชn.โ']</code> | <code>Tแปฑ รฝ trแปng cรขy lรขu nฤm trรชn ฤแบฅt lรบa bแป xแปญ phแบกt nhฦฐ thแบฟ nร o?</code> |
| <code>['"3. Ngฦฐแปi lร m chแปฉng cรณ quyแปn:\na) ฤฦฐแปฃc thรดng bรกo, giแบฃi thรญch quyแปn vร nghฤฉa vแปฅ quy ฤแปnh tแบกi ฤiแปu nร y;\nb) Yรชu cแบงu cฦก quan triแปu tแบญp bแบฃo vแป tรญnh mแบกng, sแปฉc khoแบป, danh dแปฑ, nhรขn phแบฉm, tร i sแบฃn vร quyแปn, lแปฃi รญch hแปฃp phรกp khรกc cแปงa mรฌnh, ngฦฐแปi thรขn thรญch cแปงa mรฌnh khi biฬฃ ฤe doฬฃa;\nc) Khiแบฟu nแบกi quyแบฟt ฤแปnh, hร nh vi tแป tแปฅng cแปงa cฦก quan, ngฦฐแปi cรณ thแบฉm quyแปn tiแบฟn hร nh tแป tแปฅng liรชn quan ฤแบฟn viแปc mรฌnh tham gia lร m chแปฉng;\nd) ฤฦฐแปฃc cฦก quan triแปu tแบญp thanh toรกn chi phรญ ฤi lแบกi vร nhแปฏng chi phรญ khรกc theo quy ฤแปnh cแปงa phรกp luแบญt."']</code> | <code>Quyแปn vร nghฤฉa vแปฅ cแปงa ngฦฐแปi lร m chแปฉng?</code> |
| <code>['Quy trรฌnh ฤiแปu chuyแปn tร i sแบฃn\n1. Hแป sฦก ฤแป nghแป ฤiแปu chuyแปn tร i sแบฃn:\na) Vฤn bแบฃn ฤแป nghแป ฤiแปu chuyแปn tร i sแบฃn cแปงa ฤฦกn vแป ฤฦฐแปฃc giao quแบฃn lรฝ, sแปญ dแปฅng tร i sแบฃn: 01 bแบฃn chรญnh;\nb) Vฤn bแบฃn ฤแป nghแป ฤฦฐแปฃc tiแบฟp nhแบญn tร i sแบฃn cแปงa cฦก quan, tแป chแปฉc, ฤฦกn vแป: 01 bแบฃn chรญnh;\nc) Tแป trรฌnh vแป viแปc ฤiแปu chuyแปn, tiแบฟp nhแบญn tร i sแบฃn cแปงa Vแปฅ Tร i chรญnh - Kแบฟ toรกn (trฦฐแปng hแปฃp viแปc quyแบฟt ฤแปnh ฤiแปu chuyแปn tร i sแบฃn thuแปc thแบฉm quyแปn cแปงa Phรณ Thแปng ฤแปc phแปฅ trรกch tร i chรญnh - kแบฟ toรกn): 01 bแบฃn chรญnh;\nd) Danh mแปฅc tร i sแบฃn ฤแป nghแป ฤiแปu chuyแปn (chแปงng loแบกi, mรฃ tร i sแบฃn, sแป lฦฐแปฃng, tรฌnh trแบกng; nฤm ฤฦฐa vร o sแปญ dแปฅng, nguyรชn giรก, giรก trแป cรฒn lแบกi theo sแป kแบฟ toรกn; mแปฅc ฤรญch sแปญ dแปฅng hiแปn tแบกi vร mแปฅc ฤรญch sแปญ dแปฅng dแปฑ kiแบฟn sau khi ฤiแปu chuyแปn trong trฦฐแปng hแปฃp viแปc ฤiแปu chuyแปn gแบฏn vแปi viแปc chuyแปn ฤแปi cรดng nฤng sแปญ dแปฅng tร i sแบฃn; lรฝ do ฤiแปu chuyแปn): 01 bแบฃn chรญnh;\nฤ) Cรกc hแป sฦก khรกc cรณ liรชn quan ฤแบฟn ฤแป nghแป ฤiแปu chuyแปn tร i sแบฃn (nแบฟu cรณ): 01 bแบฃn sao.\n2. Khi ฤiแปu chuyแปn, ฤฦกn vแป giao vร ฤฦกn vแป nhแบญn tร i sแบฃn phแบฃi thร nh lแบญp Hแปi ฤแปng giao nhแบญn tร i sแบฃn, gแปm ฤแบกi diแปn cแปงa hai bรชn, chแปง tแปch hแปi ฤแปng lร ฤแบกi diแปn lรฃnh ฤแบกo bรชn giao. Hแปi ฤแปng cรณ nhiแปm vแปฅ xรกc ฤแปnh sแป lฦฐแปฃng, giรก trแป (nguyรชn giรก, giรก trแป ฤรฃ khแบฅu hao, giรก trแป cรฒn lแบกi), hiแปn trแบกng cแปงa tร i sแบฃn bร n giao, cรกc hแป sฦก, chแปฉng tแปซ cรณ liรชn quan vร lแบญp "Biรชn bแบฃn bร n giao, tiแบฟp nhแบญn tร i sแบฃn" theo Mแบซu sแป 01/TSC-BBGN ban hร nh kรจm theo Nghแป ฤแปnh sแป 151/2017/Nฤ-CP ngร y 26/12/2017 quy ฤแปnh chi tiแบฟt mแปt sแป ฤiแปu cแปงa Luแบญt Quแบฃn lรฝ, sแปญ dแปฅng tร i sแบฃn cรดng. "Biรชn bแบฃn bร n giao, tiแบฟp nhแบญn tร i sแบฃn" ฤฦฐแปฃc lแบญp thร nh 3 bแบฃn, mแปi bรชn lฦฐu mแปt bแบฃn vร gแปญi mแปt bแบฃn vแป Ngรขn hร ng Nhร nฦฐแปc (Vแปฅ Tร i chรญnh - Kแบฟ toรกn).\n...']</code> | <code>Hแป sฦก ฤแป nghแป ฤiแปu chuyแปn tร i sแบฃn cแปงa Ngรขn hร ng Nhร nฦฐแปc gแปm nhแปฏng nแปi dung gรฌ?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `gradient_accumulation_steps`: 32
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 32
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:------:|:-------------:|:----------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.3810 | 10 | 4.0758 | - | - | - | - | - | - |
| 0.7619 | 20 | 2.6578 | - | - | - | - | - | - |
| **0.9905** | **26** | **-** | **1.6008** | **0.3976** | **0.4122** | **0.4218** | **0.3637** | **0.4241** |
| 1.1429 | 30 | 1.643 | - | - | - | - | - | - |
| 1.5238 | 40 | 1.2561 | - | - | - | - | - | - |
| 1.9048 | 50 | 1.1152 | - | - | - | - | - | - |
| 1.9810 | 52 | - | 1.0635 | 0.3976 | 0.4122 | 0.4218 | 0.3637 | 0.4241 |
| 2.2857 | 60 | 0.9883 | - | - | - | - | - | - |
| 2.6667 | 70 | 0.991 | - | - | - | - | - | - |
| 2.9714 | 78 | - | 0.9924 | 0.3976 | 0.4122 | 0.4218 | 0.3637 | 0.4241 |
| 3.0476 | 80 | 0.9552 | - | - | - | - | - | - |
| 3.4286 | 90 | 0.934 | - | - | - | - | - | - |
| 3.8095 | 100 | 0.9597 | - | - | - | - | - | - |
| 3.9619 | 104 | - | 0.9883 | 0.3976 | 0.4122 | 0.4218 | 0.3637 | 0.4241 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.3.1+cu121
- Accelerate: 1.0.1
- Datasets: 2.19.1
- Tokenizers: 0.20.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mradermacher/Breeze-7B-Cantonese-v0.1-GGUF | mradermacher | 2024-11-01T09:10:32Z | 17 | 0 | transformers | [
"transformers",
"gguf",
"cantonese",
"yue",
"hong kong",
"้ฆๆธฏ",
"ๅปฃๆฑ่ฉฑ",
"็ฒต่ช",
"zh",
"en",
"dataset:hon9kon9ize/yue-alpaca",
"dataset:indiejoseph/wikipedia-translate-zhhk-zhcn",
"dataset:indiejoseph/wikipedia-zh-yue-summaries",
"dataset:indiejoseph/wikipedia-zh-yue-qa",
"base_model:kennylam/Breeze-7B-Cantonese-v0.1",
"base_model:quantized:kennylam/Breeze-7B-Cantonese-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T08:55:47Z | ---
base_model: kennylam/Breeze-7B-Cantonese-v0.1
datasets:
- hon9kon9ize/yue-alpaca
- indiejoseph/wikipedia-translate-zhhk-zhcn
- indiejoseph/wikipedia-zh-yue-summaries
- indiejoseph/wikipedia-zh-yue-qa
language:
- zh
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- cantonese
- yue
- hong kong
- ้ฆๆธฏ
- ๅปฃๆฑ่ฉฑ
- ็ฒต่ช
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kennylam/Breeze-7B-Cantonese-v0.1
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
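If you prefer Python, a minimal sketch with the `llama-cpp-python` bindings (assumes a recent version with `huggingface_hub` installed so that `from_pretrained` can download a quant; the filename below is the Q4_K_M quant from the table in the next section):
```python
from llama_cpp import Llama

# Download one quant from this repo and load it
llm = Llama.from_pretrained(
    repo_id="mradermacher/Breeze-7B-Cantonese-v0.1-GGUF",
    filename="Breeze-7B-Cantonese-v0.1.Q4_K_M.gguf",  # "fast, recommended" quant
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm("ไฝ ๅฅฝ๏ผ่ซ็จๅปฃๆฑ่ฉฑไป็ดนไธไธ้ฆๆธฏใ", max_tokens=128)
print(out["choices"][0]["text"])
```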
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q8_0.gguf) | Q8_0 | 8.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.f16.gguf) | f16 | 15.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
allistair99/MobileBERT-uncased-squad-v1-BiLSTM-finetuned-squad-fc1-resize-output3-dropout02 | allistair99 | 2024-11-01T08:57:15Z | 5 | 0 | null | [
"safetensors",
"mobilebert",
"generated_from_trainer",
"base_model:csarron/mobilebert-uncased-squad-v1",
"base_model:finetune:csarron/mobilebert-uncased-squad-v1",
"license:mit",
"region:us"
] | null | 2024-11-01T08:57:02Z | ---
license: mit
base_model: csarron/mobilebert-uncased-squad-v1
tags:
- generated_from_trainer
model-index:
- name: MobileBERT-uncased-squad-v1-BiLSTM-finetuned-squad-fc1-resize-output3-dropout02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MobileBERT-uncased-squad-v1-BiLSTM-finetuned-squad-fc1-resize-output3-dropout02
This model is a fine-tuned version of [csarron/mobilebert-uncased-squad-v1](https://huggingface.co/csarron/mobilebert-uncased-squad-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0333
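A minimal extractive question-answering sketch. This assumes the checkpoint loads through the standard MobileBERT question-answering head; the BiLSTM variant named in the model title may instead require the accompanying training code, so treat this as a hypothetical starting point:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="allistair99/MobileBERT-uncased-squad-v1-BiLSTM-finetuned-squad-fc1-resize-output3-dropout02",
)

result = qa(
    question="What is MobileBERT optimized for?",
    context="MobileBERT is a compact BERT variant optimized for on-device inference.",
)
print(result["answer"], result["score"])
```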
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 6
- eval_batch_size: 60
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.56 | 1.0 | 14619 | 1.0480 |
| 0.5468 | 2.0 | 29238 | 1.0333 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.5.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Xu-Ouyang/pythia-12b-deduped-int3-step4-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-01T08:52:11Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-01T08:41:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
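A minimal sketch for loading this 3-bit GPTQ checkpoint through Transformers' GPTQ integration (assumes the `optimum` and `auto-gptq` packages are installed; the quantization config ships with the repo):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/pythia-12b-deduped-int3-step4-GPTQ-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# device_map="auto" places the quantized layers on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```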
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
coastalcph/CLIPDetail-8590864 | coastalcph | 2024-11-01T08:49:15Z | 136 | 0 | transformers | [
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | 2024-11-01T08:48:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
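A minimal zero-shot image classification sketch, assuming this is a standard CLIP checkpoint with its processor config in the repo (the image path and label set are illustrative):
```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "coastalcph/CLIPDetail-8590864"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # hypothetical input image
labels = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
probs = logits.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```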
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/French-Aya-Expanse-8B-GGUF | mradermacher | 2024-11-01T08:46:11Z | 66 | 0 | transformers | [
"transformers",
"gguf",
"fr",
"dataset:Svngoku/french-multilingual-reward-bench-dpo",
"base_model:Svngoku/French-Aya-Expanse-8B",
"base_model:quantized:Svngoku/French-Aya-Expanse-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T05:31:40Z | ---
base_model: Svngoku/French-Aya-Expanse-8B
datasets:
- Svngoku/french-multilingual-reward-bench-dpo
language:
- fr
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Svngoku/French-Aya-Expanse-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/French-Aya-Expanse-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q3_K_M.gguf) | Q3_K_M | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q3_K_L.gguf) | Q3_K_L | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q4_K_M.gguf) | Q4_K_M | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q5_K_M.gguf) | Q5_K_M | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/French-Aya-Expanse-8B-GGUF/resolve/main/French-Aya-Expanse-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hidonbush/paper-cutting | hidonbush | 2024-11-01T08:34:36Z | 35 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"en",
"zh",
"dataset:hidonbush/paper-cuttingv0.1",
"base_model:nvidia/mit-b5",
"base_model:finetune:nvidia/mit-b5",
"endpoints_compatible",
"region:us"
] | null | 2024-10-30T07:26:22Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: paper-cutting
results: []
datasets:
- hidonbush/paper-cuttingv0.1
language:
- en
- zh
metrics:
- accuracy
base_model:
- nvidia/mit-b5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paper-cutting
This model is a fine-tuned version of nvidia/mit-b5 on the paper-cutting v0.1 dataset.
It was trained to extract the body content from sources such as articles and books, as if cutting it off the paper.
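A minimal inference sketch, assuming the checkpoint loads with the standard SegFormer segmentation head (the label semantics are not documented, so how the mask maps to body content is an assumption):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "hidonbush/paper-cutting"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

page = Image.open("scanned_page.jpg")  # hypothetical input scan
inputs = processor(images=page, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, H/4, W/4)

# Upsample to the original resolution and take the per-pixel class
mask = torch.nn.functional.interpolate(
    logits, size=page.size[::-1], mode="bilinear", align_corners=False
).argmax(dim=1)[0]
print(mask.unique())
```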
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
paper-cutting v0.1
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0 |
life/retrofuturereality | life | 2024-11-01T08:27:10Z | 18 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-01T08:27:03Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: A person in a bustling cafe retrofuturereality
output:
url: samples/1730449588335__000001000_0.jpg
- text: a white spaceship in the middle of a space station, with a watermark in
the top right corner. The spaceship appears to be in the process of being
built, as evidenced by the various tools and materials scattered around it.
retrofuturereality
output:
url: samples/1730449604541__000001000_1.jpg
- text: a man and woman standing next to each other in a room, smiling. The woman
is wearing a necklace and the man is wearing formal dress. In the background,
there are a number of people and lights retrofuturereality
output:
url: samples/1730449620769__000001000_2.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: retrofuturereality
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# retrofuturereality
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `retrofuturereality` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/life/retrofuturereality/tree/main) them in the Files & versions tab.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('life/retrofuturereality', weight_name='retrofuturereality.safetensors')
image = pipeline('A person in a bustling cafe retrofuturereality').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
yifanlu/waymo-controlnet-flux | yifanlu | 2024-11-01T08:24:01Z | 8 | 0 | diffusers | [
"diffusers",
"safetensors",
"flux",
"flux-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-01T08:13:28Z | ---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
inference: true
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-yifanlu/waymo-controlnet-flux
These are ControlNet weights trained on black-forest-labs/FLUX.1-dev with a new type of conditioning.
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md)
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
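In the meantime, a minimal sketch using diffusers' Flux ControlNet classes; the type of conditioning image these weights expect is not documented, so `control_image` below is a placeholder:
```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

controlnet = FluxControlNetModel.from_pretrained(
    "yifanlu/waymo-controlnet-flux", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

control_image = load_image("conditioning.png")  # placeholder conditioning input
image = pipe(
    "a driving scene on a sunny highway",
    control_image=control_image,
    controlnet_conditioning_scale=0.7,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("output.png")
```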
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
gitgato/tessy-LoRA | gitgato | 2024-11-01T08:20:26Z | 46 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-09-25T01:56:33Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: photo of tessy a beautiful woman
parameters:
negative_prompt: Low quality
output:
url: images/Imagen de WhatsApp 2024-09-24 a las 13.59.54_6e906e0c.jpg
base_model:
- stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: tessy
license: mit
---
# tessy-LoRA
<Gallery />
## Model description
Janesde

## Trigger words
You should use `tessy` to trigger the image generation.
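## Use it with the [diffusers library](https://github.com/huggingface/diffusers)
A minimal sketch, assuming the LoRA weights load onto the SDXL base model listed above via diffusers' default weight-name detection (pass `weight_name=` explicitly if loading fails):
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the SDXL base model, then apply the tessy LoRA (default weight name assumed).
pipeline = AutoPipelineForText2Image.from_pretrained(
    'stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('gitgato/tessy-LoRA')
image = pipeline('photo of tessy a beautiful woman').images[0]
image.save('tessy.png')
```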
## Download model
Weights for this model are available in Safetensors format.
[Download](/gitgato/tessy-LoRA/tree/main) them in the Files & versions tab. |
Natthaphon/thaicapgen-swin-gpt2 | Natthaphon | 2024-11-01T08:16:20Z | 39 | 0 | null | [
"safetensors",
"clip-encoder-decoder",
"image-to-text",
"image-captioning",
"custom_code",
"th",
"region:us"
] | image-to-text | 2024-11-01T07:57:46Z | ---
tags:
- image-to-text
- image-captioning
language:
- th
---
# Thai Image Captioning
Encoder-decoder style image captioning model using [Swin-L](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) as the encoder and [GPT2](https://huggingface.co/openai-community/gpt2) as the decoder. Trained on the Thai-language MSCOCO and IPU24 datasets.
# Usage
With `VisionEncoderDecoderModel`.
```python
from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer
from PIL import Image

device = 'cuda'
image_path = 'example.jpg'  # path to the image you want to caption (placeholder)
gen_kwargs = {"max_length": 120, "num_beams": 4}
model_path = 'Natthaphon/thaicapgen-swin-gpt2'
feature_extractor = AutoImageProcessor.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = VisionEncoderDecoderModel.from_pretrained(model_path).to(device)
pixel_values = feature_extractor(images=[Image.open(image_path)], return_tensors="pt").pixel_values
pixel_values = pixel_values.to(device)
output_ids = model.generate(pixel_values, **gen_kwargs)
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
print(preds)
```
You can also load it with `AutoModel`, but this requires `trust_remote_code=True`.
```python
from transformers import AutoModel
model_path = 'Natthaphon/thaicapgen-swin-gpt2'
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).to(device)
```
# Acknowledgement
This work is partially supported by the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (PMU-B) [Grant number B04G640107] |
prkhar05/pixart-personal-model-msteps | prkhar05 | 2024-11-01T08:11:35Z | 5 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:PixArt-alpha/PixArt-XL-2-512x512",
"base_model:adapter:PixArt-alpha/PixArt-XL-2-512x512",
"region:us"
] | null | 2024-11-01T06:31:51Z | ---
base_model: PixArt-alpha/PixArt-XL-2-512x512
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
No example snippet has been provided yet; the sketch below is speculative.
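Assumptions (not confirmed by this card): the PEFT adapter targets the PixArt transformer, and the base checkpoint is the one listed in the metadata. With those caveats:
```python
import torch
from diffusers import PixArtAlphaPipeline
from peft import PeftModel

# Speculative sketch: load the base PixArt pipeline and attach this PEFT adapter.
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-512x512", torch_dtype=torch.float16
).to("cuda")
# Assumes the adapter was trained on the pipeline's transformer (not confirmed by this card).
pipe.transformer = PeftModel.from_pretrained(
    pipe.transformer, "prkhar05/pixart-personal-model-msteps"
)
image = pipe("a photo of a dog").images[0]  # illustrative prompt
image.save("out.png")
```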
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Mercuri/mrpapaelijah | Mercuri | 2024-11-01T08:11:17Z | 5 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-01T07:56:08Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: mrpapaelijah
---
# Mrpapaelijah
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `mrpapaelijah` to trigger the image generation.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Mercuri/mrpapaelijah', weight_name='lora.safetensors')
image = pipeline('a portrait photo of mrpapaelijah').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
minhdang/bge-base-financial-matryoshka_pass_2 | minhdang | 2024-11-01T08:10:57Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:107510",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:bkai-foundation-models/vietnamese-bi-encoder",
"base_model:finetune:bkai-foundation-models/vietnamese-bi-encoder",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-11-01T08:10:37Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:107510
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: bkai-foundation-models/vietnamese-bi-encoder
widget:
- source_sentence: '[''Hรฌnh thแปฉc xแปญ phแบกt vร thแปi hiแปu xแปญ phแบกt vi phแบกm hร nh chรญnh\n...\n4.
Thแปi hiแปu xแปญ phแบกt vi phแบกm hร nh chรญnh ฤแปi vแปi lฤฉnh vแปฑc kinh doanh xแป sแป:\na) Thแปi
hiแปu xแปญ phแบกt vi phแบกm hร nh chรญnh trong lฤฉnh vแปฑc kinh doanh xแป sแป lร 01 nฤm.\nb)
ฤแปi vแปi hร nh vi vi phแบกm hร nh chรญnh trong lฤฉnh vแปฑc kinh doanh xแป sแป ฤang ฤฦฐแปฃc thแปฑc
hiแปn thรฌ thแปi hiแปu ฤฦฐแปฃc tรญnh tแปซ ngร y ngฦฐแปi cรณ thแบฉm quyแปn thi hร nh cรดng vแปฅ phรกt
hiแปn hร nh vi vi phแบกm. ฤแปi vแปi hร nh vi vi phแบกm hร nh chรญnh ฤรฃ kแบฟt thรบc thรฌ thแปi
hiแปu ฤฦฐแปฃc tรญnh tแปซ ngร y chแบฅm dแปฉt hร nh vi vi phแบกm. Thแปi ฤiแปm chแบฅm dแปฉt hร nh vi vi
phแบกm ฤแป tรญnh thแปi hiแปu xแปญ phแบกt ฤแปi vแปi mแปt sแป hร nh vi vi phแบกm tแบกi Chฦฐฦกng 3 Nghแป
ฤแปnh nร y ฤฦฐแปฃc quy ฤแปnh nhฦฐ sau:\n- ฤแปi vแปi hร nh vi sแปญa chแปฏa, tแบฉy xoรก lร m thay
ฤแปi nแปi dung Giแบฅy chแปฉng nhแบญn ฤแปง ฤiแปu kiแปn kinh doanh, cรกc tร i liแปu trong hแป sฦก
ฤรฃ ฤฦฐแปฃc lร m ฤแบกi lรฝ xแป sแป quy ฤแปnh tแบกi khoแบฃn 1 ฤiแปu 35 vร khoแบฃn 1 ฤiแปu 41 Nghแป
ฤแปnh nร y nแบฟu khรดng xรกc ฤแปnh ฤฦฐแปฃc ngร y sแปญa chแปฏa, tแบฉy xoรก lร m thay ฤแปi nแปi dung
Giแบฅy chแปฉng nhแบญn ฤแปง ฤiแปu kiแปn kinh doanh, cรกc tร i liแปu trong hแป sฦก ฤรฃ ฤฦฐแปฃc lร m
ฤแบกi lรฝ xแป sแป thรฌ thแปi ฤiแปm chแบฅm dแปฉt hร nh vi vi phแบกm lร ngร y phรกt hiแปn Giแบฅy chแปฉng
nhแบญn ฤแปง ฤiแปu kiแปn kinh doanh bแป sแปญa chแปฏa, tแบฉy xรณa lร m thay ฤแปi nแปi dung;\n- ฤแปi
vแปi hร nh vi khรดng xรขy dแปฑng vร ban hร nh quy chแบฟ quy ฤแปnh chi tiแบฟt quy trรฌnh tแป
chแปฉc thu hแปi vรฉ xแป sแป khรดng tiรชu thแปฅ hแบฟt, khรดng xรขy dแปฑng vร cรดng bแป cรดng khai
thแป lแป quay sแป mแป thฦฐแปng, khรดng ban hร nh Quy chแบฟ quแบฃn lรฝ, khai thรกc dแปฏ liแปu mรกy
chแปง kinh doanh xแป sแป ฤiแปn toรกn quy ฤแปnh tแบกi khoแบฃn 1 ฤiแปu 40, khoแบฃn 1 ฤiแปu 44 vร
khoแบฃn 1 ฤiแปu 49 Nghแป ฤแปnh nร y, thแปi ฤiแปm chแบฅm dแปฉt hร nh vi vi phแบกm lร ngร y thแปฑc
hiแปn ban hร nh quy chแบฟ quy ฤแปnh chi tiแบฟt quy trรฌnh tแป chแปฉc thu hแปi vรฉ xแป sแป khรดng
tiรชu thแปฅ hแบฟt, cรดng bแป cรดng khai thแป lแป quay sแป mแป thฦฐแปng, ban hร nh Quy chแบฟ quแบฃn
lรฝ, khai thรกc dแปฏ liแปu mรกy chแปง kinh doanh xแป sแป ฤiแปn toรกn;\n- ฤแปi vแปi hร nh vi vi
phแบกm quy ฤแปnh vแป chแบฟ ฤแป bรกo cรกo quy ฤแปnh tแบกi ฤiแปu 51 Nghแป ฤแปnh nร y, thแปi ฤiแปm
chแบฅm dแปฉt hร nh vi vi phแบกm lร ngร y thแปฑc hiแปn bรกo cรกo.'']'
sentences:
- Hรฌnh thแปฉc ฤแบฅu giรก bแบฑng bแป phiแบฟu giรกn tiแบฟp ฤฦฐแปฃc phรกp luแบญt quy ฤแปnh nhฦฐ thแบฟ nร o?
- Thฦฐแปng trแปฑc Hแปi ฤแปng tฦฐ vแบฅn ฤแบทc xรก lร cฦก quan nร o?
- Thแปi hiแปu xแปญ phแบกt cฦก sแป kinh doanh xแป sแป phรกt hร nh vรฉ xแป sแป quรก hแบกn mแปฉc lร bao
lรขu?
- source_sentence: "['Thanh lรฝ hแปฃp ฤแปng thแปฑc hiแปn nhiแปm vแปฅ\\nCฤn cแปฉ Hแป sฦก ฤแป nghแป\
\ nghiแปm thu, thanh lรฝ hแปฃp ฤแปng thแปฑc hiแปn nhiแปm vแปฅ cแปงa cฦก quan chแปง trรฌ thแปฑc hiแปn,\
\ viแปc thanh lรฝ hแปฃp ฤแปng ฤรฃ kรฝ kแบฟt trong thแปi hแบกn 10 ngร y ฤฦฐแปฃc thแปฑc hiแปn kแป tแปซ\
\ ngร y cฦก quan quแบฃn lรฝ nhiแปm vแปฅ tiแบฟp nhแบญn ฤแบงy ฤแปง sแบฃn phแบฉm ฤรฃ ฤฦฐแปฃc chแปnh sแปญa theo\
\ รฝ kiแบฟn cแปงa Hแปi ฤแปng nghiแปm thu nhiแปm vแปฅ cแบฅp Bแป.\\nฤแปi vแปi cรกc nhiแปm vแปฅ thฦฐแปng\
\ xuyรชn hร ng nฤm quy ฤแปnh tแบกi ฤiแปm b, ฤiแปm h, ฤiแปm k khoแบฃn 1 ฤiแปu 3 Thรดng tฦฐ nร y\
\ ฤฦฐแปฃc cฦก quan quแบฃn lรฝ nhiแปm vแปฅ xรกc nhแบญn hoร n thร nh thรฌ vฤn bแบฃn xรกc nhแบญn hoร n\
\ thร nh nhiแปm vแปฅ lร cฤn cแปฉ nghiแปm thu, thanh lรฝ nhiแปm vแปฅ cแปงa cฦก quan chแปง trรฌ thแปฑc\
\ hiแปn.\\nBiรชn bแบฃn nghiแปm thu vร thanh lรฝ hแปฃp ฤแปng ฤแปi vแปi cรกc nhiแปm vแปฅ kรฝ hแปฃp\
\ ฤแปng thแปฑc hiแปn theo mแบซu B3a-HฤMT ฤฦฐแปฃc quy ฤแปnh tแบกi mแบซu B6a-BBTLMT. Biรชn bแบฃn\
\ nghiแปm thu vร thanh lรฝ hแปฃp ฤแปng ฤแปi vแปi cรกc nhiแปm vแปฅ kรฝ hแปฃp ฤแปng thแปฑc hiแปn theo\
\ mแบซu B3b-HฤBฤKH ฤฦฐแปฃc quy ฤแปnh tแบกi mแบซu B6b-BBTLBฤKH.'\n 'Thanh lรฝ hแปฃp ฤแปng nhiแปm\
\ vแปฅ bแบฃo vแป mรดi trฦฐแปng\\nCฤn cแปฉ Biรชn bแบฃn nghiแปm thu kแบฟt quแบฃ thแปฑc hiแปn nhiแปm vแปฅ\
\ bแบฃo vแป mรดi trฦฐแปng, viแปc thanh lรฝ hแปฃp ฤแปng ฤรฃ kรฝ kแบฟt vแปi ฤฦกn vแป chแปง trรฌ trong\
\ thแปi hแบกn 10 ngร y ฤฦฐแปฃc thแปฑc hiแปn kแป tแปซ ngร y tiแบฟp nhแบญn ฤแบงy ฤแปง sแบฃn phแบฉm ฤรฃ ฤฦฐแปฃc\
\ chแปnh sแปญa theo รฝ kiแบฟn cแปงa Hแปi ฤแปng nghiแปm thu nhiแปm vแปฅ bแบฃo vแป mรดi trฦฐแปng. Biรชn\
\ bแบฃn thanh lรฝ hแปฃp ฤแปng ฤฦฐแปฃc quy ฤแปnh tแบกi mแบซu B6a-BBTLHฤ-BCT.']"
sentences:
- Tแปn thฦฐฦกng gรขn chร y trฦฐแปc chแปง yแบฟu gแบทp trong cรกc vแบฟt thฦฐฦกng แป vรนng nร o?
- Hแปi ฤแปng Lรฝ luแบญn Trung ฦฐฦกng hแปp mแปi quรฝ mแบฅy lแบงn?
- Thแปi hแบกn thanh lรฝ hแปฃp ฤแปng nhiแปm vแปฅ bแบฃo vแป mรดi trฦฐแปng ngร nh Cรดng thฦฐฦกng sแปญ dแปฅng
nguแปn kinh phรญ sแปฑ nghiแปp mรดi trฦฐแปng lร bao lรขu?
- source_sentence: '[''ฤแปi tฦฐแปฃng รกp dแปฅng\n1. Cรกn bแป, cรดng chแปฉc cแปงa cรกc ฤฦกn vแป thuแปc
แปฆy ban Dรขn tแปc ฤฦฐแปฃc Bแป trฦฐแปng, Chแปง nhiแปm แปฆy ban Dรขn tแปc (sau ฤรขy gแปi tแบฏt lร Bแป
trฦฐแปng, Chแปง nhiแปm) giao nhiแปm vแปฅ hoแบทc phรขn cรดng lร m nhiแปm vแปฅ tiแบฟp cรดng dรขn, xแปญ
lรฝ ฤฦกn khiแบฟu nแบกi, tแป cรกo, kiแบฟn nghแป, phแบฃn รกnh tแบกi trแปฅ sแป vร cรกc ฤแปa ฤiแปm tiแบฟp
cรดng dรขn thuแปc แปฆy ban Dรขn tแปc.\n2. Bแป trฦฐแปng, Chแปง nhiแปm, cรกc Thแปฉ trฦฐแปng, Phรณ Chแปง
nhiแปm แปฆy ban Dรขn tแปc cรณ trรกch nhiแปm tiแบฟp cรดng dรขn ฤแปnh kแปณ hoแบทc ฤแปt xuแบฅt; cรดng
chแปฉc trong cรกc ฤฦกn vแป thuแปc แปฆy ban Dรขn tแปc ฤฦฐแปฃc Bแป trฦฐแปng, Chแปง nhiแปm triแปu tแบญp
lร m nhiแปm vแปฅ tiแบฟp cรดng dรขn, xแปญ lรฝ ฤฦกn khiแบฟu nแบกi, tแป cรกo, kiแบฟn nghแป, phแบฃn รกnh tแบกi
trแปฅ sแป vร cรกc ฤแปa ฤiแปm tiแบฟp cรดng dรขn thuแปc แปฆy ban Dรขn tแปc.\n3. Cรดng chแปฉc, ngฦฐแปi
tham gia tiแบฟp cรดng dรขn thuแปc แปฆy ban Dรขn tแปc ฤฦฐแปฃc Bแป trฦฐแปng, Chแปง nhiแปm giao nhiแปm
vแปฅ hoแบทc phรขn cรดng phแปi hแปฃp tiแบฟp cรดng dรขn, giแปฏ gรฌn an ninh, trแบญt tแปฑ, bแบฃo ฤแบฃm y
tแบฟ tแบกi trแปฅ sแป vร cรกc ฤแปa ฤiแปm tiแบฟp cรดng dรขn cแปงa แปฆy ban Dรขn tแปc.\n4. Cรกn bแป, cรดng
chแปฉc cแปงa cรกc tแป chแปฉc thuแปc แปฆy ban Dรขn tแปc ฤฦฐแปฃc Bแป trฦฐแปng, Chแปง nhiแปm giao nhiแปm
vแปฅ chuyรชn trรกch xแปญ lรฝ ฤฦกn khiแบฟu nแบกi, tแป cรกo, kiแบฟn nghแป, phแบฃn รกnh.'']'
sentences:
- Cรดng chแปฉc cแปงa ฤฦกn vแป cรณ ฤฦฐแปฃc hฦฐแปng chแบฟ ฤแป bแปi dฦฐแปกng khi nhแบญn nhiแปm vแปฅ tiแบฟp cรดng
dรขn tแบกi cรกc ฤแปa ฤiแปm tiแบฟp cรดng dรขn thuแปc แปฆy ban Dรขn tแปc hay khรดng?
- Ngฦฐแปi trรบng xแป sแป Vietlott cรณ ฤฦฐแปฃc bแบฃo mแบญt thรดng tin trฦฐแปc ฤแบกi chรบng?
- Viแปc cรดng bแป giรก trแป doanh nghiแปp ฤฦฐแปฃc cฦก quan ฤแบกi diแปn chแปง sแป hแปฏu thแปฑc hiแปn trong
thแปi hแบกn bao nhiรชu ngร y? Kแป tแปซ thแปi ฤiแปm nร o?
- source_sentence: '[''Hรฌnh thแปฉc tแป chแปฉc, nแปi dung vร chฦฐฦกng trรฌnh ฤร o tแบกo nghiแปp
vแปฅ thแบฉm ฤแปnh giรก\n1. Khรณa ฤร o tแบกo nghiแปp vแปฅ thแบฉm ฤแปnh giรก ฤฦฐแปฃc tแป chแปฉc tแบญp trung
mแปt kแปณ liรชn tแปฅc hoแบทc nhiแปu kแปณ nhฦฐng khรดng kรฉo dร i quรก 3 (ba) thรกng cho mแปt khรณa
hแปc vร phแบฃi ฤแบฃm bแบฃo dแบกy vร hแปc ฤแปง thแปi lฦฐแปฃng, nแปi dung vร chฦฐฦกng trรฌnh theo quy
ฤแปnh tแบกi khoแบฃn 2 ฤiแปu nร y.\n...'']'
sentences:
- Thแปi gian รกp dแปฅng biแปn phรกp cรกch ly y tแบฟ ฤฦฐแปฃc phรกp luแบญt quy ฤแปnh nhฦฐ thแบฟ nร o?
- Khi thแปฑc hiแปn khuyแบฟn mแบกi cung แปฉng dแปch vแปฅ thรดng tin di ฤแปng mแบซu ฤแป khรกch hร ng
    dรนng thแปญ khรดng phแบฃi trแบฃ tiแปn, doanh nghiแปp viแปn thรดng cรณ cแบงn ฤฤng kรฝ khuyแบฟn mแบกi
    khรดng?
- Mแปt khรณa ฤร o tแบกo nghiแปp vแปฅ thแบฉm ฤแปnh giรก kรฉo dร i bao lรขu?
- source_sentence: '[''Tiรชu chuแบฉn Chi cแปฅc trฦฐแปng, Phรณ Chi cแปฅc trฦฐแปng thuแปc Cแปฅc Thuแบฟ\n1.
Vแป trรญ vร nhiแปm vแปฅ\na) Chi cแปฅc trฦฐแปng Chi cแปฅc Thuแบฟ lร ngฦฐแปi ฤแปฉng ฤแบงu Chi cแปฅc Thuแบฟ,
chแปu trรกch nhiแปm trฦฐแปc Cแปฅc trฦฐแปng Cแปฅc Thuแบฟ vร trฦฐแปc phรกp luแบญt vแป toร n bแป hoแบกt
ฤแปng nhiแปm vแปฅ cแปงa ฤฦกn vแป ฤฦฐแปฃc cแบฅp cรณ thแบฉm quyแปn giao nhiแปm vแปฅ quแบฃn lรฝ nhร nฦฐแปc
trรชn ฤแปa bร n quแบญn, huyแปn, thแป xรฃ, thร nh phแป thuแปc tแปnh.\nb) Phรณ Chi cแปฅc trฦฐแปng
Chi cแปฅc Thuแบฟ lร ngฦฐแปi giรบp viแปc Chi cแปฅc trฦฐแปng, chแปu trรกch nhiแปm trฦฐแปc Chi cแปฅc
trฦฐแปng vร trฦฐแปc phรกp luแบญt vแป lฤฉnh vแปฑc cรดng tรกc ฤฦฐแปฃc phรขn cรดng; thay mแบทt Chi cแปฅc
trฦฐแปng ฤiแปu hร nh, giแบฃi quyแบฟt cรกc cรดng viแปc cแปงa Chi cแปฅc khi ฤฦฐแปฃc Chi cแปฅc trฦฐแปng
แปงy quyแปn, giao nhiแปm vแปฅ.'']'
sentences:
- Nhiแปm vแปฅ cแปงa Chi cแปฅc trฦฐแปng thuแปc Cแปฅc Thuแบฟ nhฦฐ thแบฟ nร o?
- Viแปc ฤรกnh giรก chแบฅt lฦฐแปฃng dแปch vแปฅ sแปฑ nghiแปp cรดng vแป xรขy dแปฑng cฦก sแป dแปฏ liแปu ฤฦฐแปฃc
thแปฑc hiแปn theo phฦฐฦกng thแปฉc nร o?
- Khoแบฃn phแปฅ cแบฅp chuyรชn cแบงn cรณ tรญnh vร o lฦฐฦกng ฤแป tรญnh tiแปn lฦฐฦกng tฤng ca, lฦฐฦกng lร m
thรชm giแป hay khรดng?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on bkai-foundation-models/vietnamese-bi-encoder
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.26527708019420726
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4377197388247112
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5174116859199732
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6099112673698309
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.26527708019420726
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.14590657960823708
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.10348233718399463
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.060991126736983085
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.26527708019420726
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4377197388247112
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5174116859199732
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6099112673698309
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4285225723707542
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.37149118785859175
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.38082252053876386
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.26586305039343716
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.43227858697471955
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5082872928176796
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6015402645236899
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.26586305039343716
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1440928623249065
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1016574585635359
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06015402645236899
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.26586305039343716
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.43227858697471955
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5082872928176796
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6015402645236899
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4244877080296015
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.36887667785457956
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3780890557065138
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.2483676544450025
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4107651096601373
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.4801607232546459
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5700652938222
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2483676544450025
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.13692170322004574
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.09603214465092917
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.05700652938221999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2483676544450025
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4107651096601373
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.4801607232546459
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5700652938222
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.40061709420771235
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.34734958105124125
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.35675125361493826
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.22141302528042858
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.3701657458563536
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.4385568391093253
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5179976561192031
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.22141302528042858
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.12338858195211787
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.08771136782186506
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.051799765611920304
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.22141302528042858
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.3701657458563536
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.4385568391093253
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5179976561192031
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3619435400628976
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3128400221632284
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.32179789892986727
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.1616440649589821
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.27749874434957306
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.33433785367487023
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.4103465595178302
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.1616440649589821
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.09249958144985769
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.06686757073497404
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.04103465595178302
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.1616440649589821
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.27749874434957306
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.33433785367487023
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.4103465595178302
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.27713659801328827
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.23557945277558567
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.24398402076434567
name: Cosine Map@100
---
# SentenceTransformer based on bkai-foundation-models/vietnamese-bi-encoder
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder) <!-- at revision 84f9d9ada0d1a3c37557398b9ae9fcedcdf40be0 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("minhdang/bge-base-financial-matryoshka_pass_2")
# Run inference
sentences = [
"['Tiรชu chuแบฉn Chi cแปฅc trฦฐแปng, Phรณ Chi cแปฅc trฦฐแปng thuแปc Cแปฅc Thuแบฟ\\n1. Vแป trรญ vร nhiแปm vแปฅ\\na) Chi cแปฅc trฦฐแปng Chi cแปฅc Thuแบฟ lร ngฦฐแปi ฤแปฉng ฤแบงu Chi cแปฅc Thuแบฟ, chแปu trรกch nhiแปm trฦฐแปc Cแปฅc trฦฐแปng Cแปฅc Thuแบฟ vร trฦฐแปc phรกp luแบญt vแป toร n bแป hoแบกt ฤแปng nhiแปm vแปฅ cแปงa ฤฦกn vแป ฤฦฐแปฃc cแบฅp cรณ thแบฉm quyแปn giao nhiแปm vแปฅ quแบฃn lรฝ nhร nฦฐแปc trรชn ฤแปa bร n quแบญn, huyแปn, thแป xรฃ, thร nh phแป thuแปc tแปnh.\\nb) Phรณ Chi cแปฅc trฦฐแปng Chi cแปฅc Thuแบฟ lร ngฦฐแปi giรบp viแปc Chi cแปฅc trฦฐแปng, chแปu trรกch nhiแปm trฦฐแปc Chi cแปฅc trฦฐแปng vร trฦฐแปc phรกp luแบญt vแป lฤฉnh vแปฑc cรดng tรกc ฤฦฐแปฃc phรขn cรดng; thay mแบทt Chi cแปฅc trฦฐแปng ฤiแปu hร nh, giแบฃi quyแบฟt cรกc cรดng viแปc cแปงa Chi cแปฅc khi ฤฦฐแปฃc Chi cแปฅc trฦฐแปng แปงy quyแปn, giao nhiแปm vแปฅ.']",
'Nhiแปm vแปฅ cแปงa Chi cแปฅc trฦฐแปng thuแปc Cแปฅc Thuแบฟ nhฦฐ thแบฟ nร o?',
'Khoแบฃn phแปฅ cแบฅp chuyรชn cแบงn cรณ tรญnh vร o lฦฐฦกng ฤแป tรญnh tiแปn lฦฐฦกng tฤng ca, lฦฐฦกng lร m thรชm giแป hay khรดng?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
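Because the model was trained with a Matryoshka loss at dimensions 768/512/256/128/64 (evaluated below), you can also load it with a truncated embedding size. A sketch using the `truncate_dim` argument:
```python
from sentence_transformers import SentenceTransformer

# Truncate embeddings to one of the trained Matryoshka dimensions, e.g. 256.
model = SentenceTransformer("minhdang/bge-base-financial-matryoshka_pass_2", truncate_dim=256)
embeddings = model.encode(["your query here"])
print(embeddings.shape)
# (1, 256)
```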
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2653 |
| cosine_accuracy@3 | 0.4377 |
| cosine_accuracy@5 | 0.5174 |
| cosine_accuracy@10 | 0.6099 |
| cosine_precision@1 | 0.2653 |
| cosine_precision@3 | 0.1459 |
| cosine_precision@5 | 0.1035 |
| cosine_precision@10 | 0.061 |
| cosine_recall@1 | 0.2653 |
| cosine_recall@3 | 0.4377 |
| cosine_recall@5 | 0.5174 |
| cosine_recall@10 | 0.6099 |
| cosine_ndcg@10 | 0.4285 |
| cosine_mrr@10 | 0.3715 |
| **cosine_map@100** | **0.3808** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2659 |
| cosine_accuracy@3 | 0.4323 |
| cosine_accuracy@5 | 0.5083 |
| cosine_accuracy@10 | 0.6015 |
| cosine_precision@1 | 0.2659 |
| cosine_precision@3 | 0.1441 |
| cosine_precision@5 | 0.1017 |
| cosine_precision@10 | 0.0602 |
| cosine_recall@1 | 0.2659 |
| cosine_recall@3 | 0.4323 |
| cosine_recall@5 | 0.5083 |
| cosine_recall@10 | 0.6015 |
| cosine_ndcg@10 | 0.4245 |
| cosine_mrr@10 | 0.3689 |
| **cosine_map@100** | **0.3781** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2484 |
| cosine_accuracy@3 | 0.4108 |
| cosine_accuracy@5 | 0.4802 |
| cosine_accuracy@10 | 0.5701 |
| cosine_precision@1 | 0.2484 |
| cosine_precision@3 | 0.1369 |
| cosine_precision@5 | 0.096 |
| cosine_precision@10 | 0.057 |
| cosine_recall@1 | 0.2484 |
| cosine_recall@3 | 0.4108 |
| cosine_recall@5 | 0.4802 |
| cosine_recall@10 | 0.5701 |
| cosine_ndcg@10 | 0.4006 |
| cosine_mrr@10 | 0.3473 |
| **cosine_map@100** | **0.3568** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2214 |
| cosine_accuracy@3 | 0.3702 |
| cosine_accuracy@5 | 0.4386 |
| cosine_accuracy@10 | 0.518 |
| cosine_precision@1 | 0.2214 |
| cosine_precision@3 | 0.1234 |
| cosine_precision@5 | 0.0877 |
| cosine_precision@10 | 0.0518 |
| cosine_recall@1 | 0.2214 |
| cosine_recall@3 | 0.3702 |
| cosine_recall@5 | 0.4386 |
| cosine_recall@10 | 0.518 |
| cosine_ndcg@10 | 0.3619 |
| cosine_mrr@10 | 0.3128 |
| **cosine_map@100** | **0.3218** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.1616 |
| cosine_accuracy@3 | 0.2775 |
| cosine_accuracy@5 | 0.3343 |
| cosine_accuracy@10 | 0.4103 |
| cosine_precision@1 | 0.1616 |
| cosine_precision@3 | 0.0925 |
| cosine_precision@5 | 0.0669 |
| cosine_precision@10 | 0.041 |
| cosine_recall@1 | 0.1616 |
| cosine_recall@3 | 0.2775 |
| cosine_recall@5 | 0.3343 |
| cosine_recall@10 | 0.4103 |
| cosine_ndcg@10 | 0.2771 |
| cosine_mrr@10 | 0.2356 |
| **cosine_map@100** | **0.244** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 107,510 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 34 tokens</li><li>mean: 209.22 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 25.12 tokens</li><li>max: 53 tokens</li></ul> |
* Samples:
| positive | anchor |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------|
| <code>['ฤiแปu kiแปn thแปฑc hiแปn cรกc quyแปn chuyแปn ฤแปi, chuyแปn nhฦฐแปฃng, cho thuรช, cho thuรช lแบกi, thแปซa kแบฟ, tแบทng cho, thแบฟ chแบฅp quyแปn sแปญ dแปฅng ฤแบฅt; gรณp vแปn bแบฑng quyแปn sแปญ dแปฅng ฤแบฅt\n1. Ngฦฐแปi sแปญ dแปฅng ฤแบฅt ฤฦฐแปฃc thแปฑc hiแปn cรกc quyแปn chuyแปn ฤแปi, chuyแปn nhฦฐแปฃng, cho thuรช, cho thuรช lแบกi, thแปซa kแบฟ, tแบทng cho, thแบฟ chแบฅp quyแปn sแปญ dแปฅng ฤแบฅt; gรณp vแปn bแบฑng quyแปn sแปญ dแปฅng ฤแบฅt khi cรณ cรกc ฤiแปu kiแปn sau ฤรขy:\na) Cรณ Giแบฅy chแปฉng nhแบญn, trแปซ trฦฐแปng hแปฃp quy ฤแปnh tแบกi khoแบฃn 3 ฤiแปu 186 vร trฦฐแปng hแปฃp nhแบญn thแปซa kแบฟ quy ฤแปnh tแบกi khoแบฃn 1 ฤiแปu 168 cแปงa Luแบญt nร y;\nb) ฤแบฅt khรดng cรณ tranh chแบฅp;\nc) Quyแปn sแปญ dแปฅng ฤแบฅt khรดng bแป kรช biรชn ฤแป bแบฃo ฤแบฃm thi hร nh รกn;\nd) Trong thแปi hแบกn sแปญ dแปฅng ฤแบฅt.\n...']</code> | <code>ฤแป tแบทng cho quyแปn sแปญ dแปฅng ฤแบฅt thรฌ ngฦฐแปi sแปญ dแปฅng ฤแบฅt phแบฃi ฤแบฃm bแบฃo ฤฦฐแปฃc nhแปฏng ฤiแปu kiแปn nร o?</code> |
| <code>['Vแปn hoแบกt ฤแปng cแปงa hแปฃp tรกc xรฃ\n1. Vแปn hoแบกt ฤแปng cแปงa hแปฃp tรกc xรฃ, liรชn hiแปp hแปฃp tรกc xรฃ gแปm vแปn gรณp cแปงa thร nh viรชn, hแปฃp tรกc xรฃ thร nh viรชn, vแปn huy ฤแปng, vแปn tรญch lลฉy, cรกc quแปน cแปงa hแปฃp tรกc xรฃ, liรชn hiแปp hแปฃp tรกc xรฃ; cรกc khoแบฃn trแปฃ cแบฅp, hแป trแปฃ cแปงa Nhร nฦฐแปc, cแปงa cรกc tแป chแปฉc, cรก nhรขn trong nฦฐแปc vร nฦฐแปc ngoร i; cรกc khoแบฃn ฤฦฐแปฃc tแบทng, cho vร cรกc nguแปn thu hแปฃp phรกp khรกc.\n2. ฤiแปu lแป, quy chแบฟ quแบฃn lรฝ tร i chรญnh cแปงa hแปฃp tรกc xรฃ, liรชn hiแปp hแปฃp tรกc xรฃ quy ฤแปnh cแปฅ thแป viแปc quแบฃn lรฝ, sแปญ dแปฅng vแปn hoแบกt ฤแปng cแปงa hแปฃp tรกc xรฃ, liรชn hiแปp hแปฃp tรกc xรฃ phรน hแปฃp vแปi quy ฤแปnh cแปงa Luแบญt Hแปฃp tรกc xรฃ vร quy ฤแปnh cแปงa phรกp luแบญt cรณ liรชn quan.']</code> | <code>Vแปn hoแบกt ฤแปng cแปงa hแปฃp tรกc xรฃ bao gแปm nhแปฏng nguแปn nร o?</code> |
| <code>['Vแป kแปน nฤng\n- Sแปญ dแปฅng ฤฦฐแปฃc cรดng nghรชฬฃ thรดng tin cฦก bแบฃn theo quy ฤแปnh;\n- Xรกc ฤแปnh ฤฦฐแปฃc yรชu cแบงu cแปงa hรชฬฃ thแปng cฦก sแป dแปฏ liรชฬฃu;\n- Cร i ฤแบทt thร nh thแบกo phแบงn mรชฬm quแบฃn trแป cฦก sแป dแปฏ liรชฬฃu;\n- Khai thรกc hiรชฬฃu suแบฅt cao hรชฬฃ thแปng cฦก sแป dแปฏ liรชฬฃu;\n- Quแบฃn lรฝ an toร n hรชฬฃ thแปng cฦก sแป dแปฏ liรชฬฃu;\n- Bแบฃo trรฌ ฤฦฐแปฃc hรชฬฃ thแปng;\n- Bแบฃo mแบญt ฤฦฐแปฃc hรชฬฃ thแปng cฦก sแป dแปฏ liรชฬฃu;\n- Nรขng cแบฅp ฤฦฐแปฃc hรชฬฃ thแปng cฦก sแป dแปฏ liรชฬฃu;\n- Xรขy dฦฐฬฃng ฤฦฐแปฃc แปฉng dแปฅng;\n- Tรญch hแปฃp ฤฦฐแปฃc cรกc hรชฬฃ thแปng cฦก sแป dแปฏ liรชฬฃu;\n- Bแบฃo trรฌ, sแปญa chแปฏa, nรขng cแบฅp ฤฦฐแปฃc phแบงn mรชฬm vร phแบงn cแปฉng cแปงa hรชฬฃ thแปng mแบกng;\n- Xรขy dฦฐฬฃng ฤฦฐแปฃc cรกc แปฉng dแปฅng ฤฦกn giแบฃn trรชn hรชฬฃ thแปng mแบกng;\n- Ghi ฤฦฐแปฃc nhแบญt kรฝ cลฉng nhฦฐ bรกo cรกo cรดng viรชฬฃc, tiแบฟn ฤแป cรดng viรชฬฃc;\n- Thฦฐฬฃc hiรชฬฃn ฤฦฐแปฃc cรกc biรชฬฃn phรกp vรชฬฃ sinh cรดng nghiรชฬฃp, an toร n lao ฤแปng;\n- Giao tiแบฟp hiรชฬฃu quแบฃ thรดng qua viแบฟt, thuyแบฟt trรฌnh, thแบฃo luแบญn, ฤร m phรกn, lร m chแปง tรฌnh huแปng;\n- Giรกm sรกt hรชฬฃ thแปng cรดng nghรชฬฃ thรดng tin vแปซa vร nhแป;\n- Sแปญ dแปฅng ฤฦฐแปฃc cรดng nghรชฬฃ thรดng tin cฦก bแบฃn theo quy ฤแปnh; แปฉng dแปฅng cรดng nghรชฬฃ thรดng tin trong mแปt sแป cรดng viรชฬฃc chuyรชn mรดn cแปงa ngร nh, nghรชฬ;\n- Sแปญ dแปฅng ฤฦฐแปฃc ngoแบกi ngแปฏ cฦก bแบฃn, ฤแบกt bแบญc 1/6 trong Khung nฤng lฦฐฬฃc ngoแบกi ngแปฏ cแปงa Viรชฬฃt Nam; แปฉng dแปฅng ฤฦฐแปฃc ngoแบกi ngแปฏ vร o mแปt sแป cรดng viรชฬฃc chuyรชn mรดn cแปงa ngร nh, nghรชฬ.']</code> | <code>Ngฦฐแปi hแปc ngร nh quแบฃn trแป cฦก sแป dแปฏ liแปu trรฌnh ฤแป trung cแบฅp sau khi tแปt nghiแปp phแบฃi cรณ kแปน nฤng ngoแบกi ngแปฏ nhฦฐ thแบฟ nร o?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
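For reference, the configuration above corresponds roughly to the following construction — a sketch using the sentence-transformers 3.x API, not the authors' exact training script:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("bkai-foundation-models/vietnamese-bi-encoder")
# Apply the ranking loss at every Matryoshka dimension with equal weight.
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```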
### Evaluation Dataset
#### json
* Dataset: json
* Size: 11,946 evaluation samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 31 tokens</li><li>mean: 210.02 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 24.98 tokens</li><li>max: 64 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>['Miแปn nhiแปm, cรกch chแปฉc Trฦฐแปng ban kiแปm soรกt, Kiแปm soรกt viรชn\n1. Trฦฐแปng ban kiแปm soรกt, Kiแปm soรกt viรชn bแป miแปn nhiแปm trong cรกc trฦฐแปng hแปฃp sau ฤรขy:\na) Khรดng cรฒn ฤแปง tiรชu chuแบฉn vร ฤiแปu kiแปn theo quy ฤแปnh tแบกi ฤiแปu 23 cแปงa ฤiแปu lแป nร y;\nb) Cรณ ฤฦกn xin tแปซ chแปฉc vร ฤฦฐแปฃc cฦก quan ฤแบกi diแปn chแปง sแป hแปฏu chแบฅp thuแบญn;\nc) ฤฦฐแปฃc cฦก quan ฤแบกi diแปn chแปง sแป hแปฏu hoแบทc cฦก quan cรณ thแบฉm quyแปn khรกc ฤiแปu ฤแปng, phรขn cรดng thแปฑc hiแปn nhiแปm vแปฅ khรกc;\nd) Trฦฐแปng hแปฃp khรกc theo quy ฤแปnh cแปงa phรกp luแบญt.\n...']</code> | <code>Viแปc miแปn nhiแปm Trฦฐแปng Ban kiแปm soรกt Tแปng cรดng ty Giแบฅy Viแปt Nam ฤฦฐแปฃc thแปฑc hiแปn khi nร o?</code> |
| <code>['Cแบฅp giแบฅy phรฉp hoแบกt ฤแปng tฦฐ vแบฅn chuyรชn ngร nh ฤiแปn thuแปc thแบฉm quyแปn cแบฅp cแปงa ฤแปa phฦฐฦกng\n...\nc) Thร nh phแบงn hแป sฦก:\n- Vฤn bแบฃn ฤแป nghแป cแบฅp giแบฅy phรฉp hoแบกt ฤแปng ฤiแปn lแปฑc theo Mแบซu 01 quy ฤแปnh tแบกi Phแปฅ lแปฅc ban hร nh kรจm theo Thรดng tฦฐ sแป 21/2020/TT-BCT .\n- Bแบฃn sao Giแบฅy chแปฉng nhแบญn ฤฤng kรฝ doanh nghiแปp hoแบทc Quyแบฟt ฤแปnh thร nh lแบญp, Giแบฅy chแปฉng nhแบญn thร nh lแบญp (ฤแปi vแปi cรกc tแป chแปฉc khรดng cรณ Giแบฅy chแปฉng nhแบญn ฤฤng kรฝ doanh nghiแปp) cแปงa tแป chแปฉc ฤแป nghแป cแบฅp giแบฅy phรฉp.\n- Danh sรกch trรญch ngang chuyรชn gia tฦฐ vแบฅn ฤแบฃm nhiแปm chแปฉc danh chแปง nhiแปm, chแปฉc danh giรกm sรกt trฦฐแปng vร cรกc chuyรชn gia tฦฐ vแบฅn khรกc theo Mแบซu 3a quy ฤแปnh tแบกi Phแปฅ lแปฅc ban hร nh kรจm theo Thรดng tฦฐ sแป 21/2020/TT-BCT ; bแบฃn sao bแบฑng tแปt nghiแปp ฤแบกi hแปc trแป lรชn, chแปฉng chแป hร nh nghแป hoแบกt ฤแปng xรขy dแปฑng, hแปฃp ฤแปng lao ฤแปng xรกc ฤแปnh thแปi hแบกn hoแบทc khรดng xรกc ฤแปnh thแปi hแบกn cแปงa cรกc chuyรชn gia tฦฐ vแบฅn.\n- Tร i liแปu chแปฉng minh kinh nghiแปm cแปงa cรกc chuyรชn gia tฦฐ vแบฅn (Quyแบฟt ฤแปnh phรขn cรดng nhiแปm vแปฅ, giแบฅy xรกc nhแบญn cแปงa cรกc ฤฦกn vแป cรณ dแปฑ รกn mร chuyรชn gia ฤรฃ thแปฑc hiแปn hoแบทc cรกc tร i liแปu cรณ giรก trแป tฦฐฦกng ฤฦฐฦกng).\n...']</code> | <code>Cแบงn chuแบฉn bแป nhแปฏng giแบฅy tแป gรฌ ฤแป thแปฑc hiแปn thแปง tแปฅc cแบฅp giแบฅy phรฉp hoแบกt ฤแปng tฦฐ vแบฅn thiแบฟt kแบฟ cรดng trรฌnh ฤฦฐแปng dรขy vร trแบกm biแบฟn รกp cรณ cแบฅp ฤiแปn รกp ฤแบฟn 35kV?</code> |
| <code>['ฤiแปu 41. Tแบกm hoรฃn gแปi nhแบญp ngลฉ vร miแปn gแปi nhแบญp ngลฉ\n1. Tแบกm hoรฃn gแปi nhแบญp ngลฉ ฤแปi vแปi nhแปฏng cรดng dรขn sau ฤรขy:\na) Chฦฐa ฤแปง sแปฉc khแปe phแปฅc vแปฅ tแบกi ngลฉ theo kแบฟt luแบญn cแปงa Hแปi ฤแปng khรกm sแปฉc khแปe;\nb) Lร lao ฤแปng duy nhแบฅt phแบฃi trแปฑc tiแบฟp nuรดi dฦฐแปกng thรขn nhรขn khรดng cรฒn khแบฃ nฤng lao ฤแปng hoแบทc chฦฐa ฤแบฟn tuแปi lao ฤแปng; trong gia ฤรฌnh bแป thiแปt hแบกi nแบทng vแป ngฦฐแปi vร tร i sแบฃn do tai nแบกn, thiรชn tai, dแปch bแปnh nguy hiแปm gรขy ra ฤฦฐแปฃc แปฆy ban nhรขn dรขn cแบฅp xรฃ xรกc nhแบญn;\nc) Mแปt con cแปงa bแปnh binh, ngฦฐแปi nhiแปm chแบฅt ฤแปc da cam suy giแบฃm khแบฃ nฤng lao ฤแปng tแปซ 61% ฤแบฟn 80%;\nd) Cรณ anh, chแป hoแบทc em ruแปt lร hแบก sฤฉ quan, binh sฤฉ ฤang phแปฅc vแปฅ tแบกi ngลฉ; hแบก sฤฉ quan, chiแบฟn sฤฉ thแปฑc hiแปn nghฤฉa vแปฅ tham gia Cรดng an nhรขn dรขn;\nฤ) Ngฦฐแปi thuแปc diแปn di dรขn, giรฃn dรขn trong 03 nฤm ฤแบงu ฤแบฟn cรกc xรฃ ฤแบทc biแปt khรณ khฤn theo dแปฑ รกn phรกt triแปn kinh tแบฟ - xรฃ hแปi cแปงa Nhร nฦฐแปc do แปฆy ban nhรขn dรขn cแบฅp tแปnh trแป lรชn quyแบฟt ฤแปnh;\ne) Cรกn bแป, cรดng chแปฉc, viรชn chแปฉc, thanh niรชn xung phong ฤฦฐแปฃc ฤiแปu ฤแปng ฤแบฟn cรดng tรกc, lร m viแปc แป vรนng cรณ ฤiแปu kiแปn kinh tแบฟ - xรฃ hแปi ฤแบทc biแปt khรณ khฤn theo quy ฤแปnh cแปงa phรกp luแบญt;\ng) ฤang hแปc tแบกi cฦก sแป giรกo dแปฅc phแป thรดng; ฤang ฤฦฐแปฃc ฤร o tแบกo trรฌnh ฤแป ฤแบกi hแปc hแป chรญnh quy thuแปc cฦก sแป giรกo dแปฅc ฤแบกi hแปc, trรฌnh ฤแป cao ฤแบณng hแป chรญnh quy thuแปc cฦก sแป giรกo dแปฅc nghแป nghiแปp trong thแปi gian mแปt khรณa ฤร o tแบกo cแปงa mแปt trรฌnh ฤแป ฤร o tแบกo.\nh) Dรขn quรขn thฦฐแปng trแปฑc.\n2. Miแปn gแปi nhแบญp ngลฉ ฤแปi vแปi nhแปฏng cรดng dรขn sau ฤรขy:\na) Con cแปงa liแปt sฤฉ, con cแปงa thฦฐฦกng binh hแบกng mแปt;\nb) Mแปt anh hoแบทc mแปt em trai cแปงa liแปt sฤฉ;\nc) Mแปt con cแปงa thฦฐฦกng binh hแบกng hai; mแปt con cแปงa bแปnh binh suy giแบฃm khแบฃ nฤng lao ฤแปng tแปซ 81% trแป lรชn; mแปt con cแปงa ngฦฐแปi nhiแปm chแบฅt ฤแปc da cam suy giแบฃm khแบฃ nฤng lao ฤแปng tแปซ 81 % trแป lรชn;\nd) Ngฦฐแปi lร m cรดng tรกc cฦก yแบฟu khรดng phแบฃi lร quรขn nhรขn, Cรดng an nhรขn dรขn;\nฤ) Cรกn bแป, cรดng chแปฉc, viรชn chแปฉc, thanh niรชn xung phong ฤฦฐแปฃc ฤiแปu ฤแปng ฤแบฟn cรดng tรกc, lร m viแปc แป vรนng cรณ ฤiแปu kiแปn kinh tแบฟ - xรฃ hแปi ฤแบทc biแปt khรณ khฤn theo quy ฤแปnh cแปงa phรกp luแบญt tแปซ 24 thรกng trแป lรชn.\n3. Cรดng dรขn thuแปc diแปn tแบกm hoรฃn gแปi nhแบญp ngลฉ quy ฤแปnh tแบกi khoแบฃn 1 ฤiแปu nร y, nแบฟu khรดng cรฒn lรฝ do tแบกm hoรฃn thรฌ ฤฦฐแปฃc gแปi nhแบญp ngลฉ.\nCรดng dรขn thuแปc diแปn ฤฦฐแปฃc tแบกm hoรฃn gแปi nhแบญp ngลฉ hoแบทc ฤฦฐแปฃc miแปn gแปi nhแบญp ngลฉ quy ฤแปnh tแบกi khoแบฃn 1 vร khoแบฃn 2 ฤiแปu nร y, nแบฟu tรฌnh nguyแปn thรฌ ฤฦฐแปฃc xem xรฉt tuyแปn chแปn vร gแปi nhแบญp ngลฉ.\n4. Danh sรกch cรดng dรขn thuแปc diแปn ฤฦฐแปฃc tแบกm hoรฃn gแปi nhแบญp ngลฉ, ฤฦฐแปฃc miแปn gแปi nhแบญp ngลฉ phแบฃi ฤฦฐแปฃc niรชm yแบฟt cรดng khai tแบกi trแปฅ sแป แปฆy ban nhรขn dรขn cแบฅp xรฃ, cฦก quan, tแป chแปฉc trong thแปi hแบกn 20 ngร y.']</code> | <code>Liรชn quan ฤแบฟn tแบกm hoรฃn nghฤฉa vแปฅ quรขn sแปฑ ฤฦฐแปฃc phรกp luแบญt quy ฤแปnh nhฦฐ thแบฟ nร o?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
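As a sketch, the non-default values above map onto `SentenceTransformerTrainingArguments` roughly as follows (argument names per sentence-transformers 3.x; `output_dir` is illustrative):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka_pass_2",  # illustrative
    num_train_epochs=4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="epoch",
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```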
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:------:|:----:|:-------------:|:------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.0952 | 10 | 2.1759 | - | - | - | - | - | - |
| 0.1905 | 20 | 1.4526 | - | - | - | - | - | - |
| 0.2857 | 30 | 1.4855 | - | - | - | - | - | - |
| 0.3810 | 40 | 1.5256 | - | - | - | - | - | - |
| 0.4762 | 50 | 1.6203 | - | - | - | - | - | - |
| 0.5714 | 60 | 1.6302 | - | - | - | - | - | - |
| 0.6667 | 70 | 1.8354 | - | - | - | - | - | - |
| 0.7619 | 80 | 1.4928 | - | - | - | - | - | - |
| 0.8571 | 90 | 1.6114 | - | - | - | - | - | - |
| 0.9524 | 100 | 1.5655 | - | - | - | - | - | - |
| 1.0 | 105 | - | 1.4307 | 0.3218 | 0.3568 | 0.3781 | 0.2440 | 0.3808 |
| 1.0476 | 110 | 1.4171 | - | - | - | - | - | - |
| 1.1429 | 120 | 1.572 | - | - | - | - | - | - |
| 1.2381 | 130 | 1.3337 | - | - | - | - | - | - |
| 1.3333 | 140 | 1.2587 | - | - | - | - | - | - |
| 1.4286 | 150 | 1.3038 | - | - | - | - | - | - |
| 1.5238 | 160 | 1.5032 | - | - | - | - | - | - |
| 1.6190 | 170 | 1.1601 | - | - | - | - | - | - |
| 1.7143 | 180 | 1.2226 | - | - | - | - | - | - |
| 1.8095 | 190 | 1.1545 | - | - | - | - | - | - |
| 1.9048 | 200 | 1.2034 | - | - | - | - | - | - |
| 2.0 | 210 | 1.0695 | 1.1034 | 0.3218 | 0.3568 | 0.3781 | 0.2440 | 0.3808 |
| 2.0952 | 220 | 1.0259 | - | - | - | - | - | - |
| 2.1905 | 230 | 0.8647 | - | - | - | - | - | - |
| 2.2857 | 240 | 0.901 | - | - | - | - | - | - |
| 2.3810 | 250 | 0.9261 | - | - | - | - | - | - |
| 2.4762 | 260 | 0.8719 | - | - | - | - | - | - |
| 2.5714 | 270 | 0.8008 | - | - | - | - | - | - |
| 2.6667 | 280 | 0.7091 | - | - | - | - | - | - |
| 2.7619 | 290 | 0.6592 | - | - | - | - | - | - |
| 2.8571 | 300 | 0.69 | - | - | - | - | - | - |
| 2.9524 | 310 | 0.739 | - | - | - | - | - | - |
| 3.0 | 315 | - | 0.8128 | 0.3218 | 0.3568 | 0.3781 | 0.2440 | 0.3808 |
| 3.0476 | 320 | 0.6944 | - | - | - | - | - | - |
| 3.1429 | 330 | 0.6414 | - | - | - | - | - | - |
| 3.2381 | 340 | 0.5874 | - | - | - | - | - | - |
| 3.3333 | 350 | 0.5979 | - | - | - | - | - | - |
| 3.4286 | 360 | 0.5409 | - | - | - | - | - | - |
| 3.5238 | 370 | 0.576 | - | - | - | - | - | - |
| 3.6190 | 380 | 0.5371 | - | - | - | - | - | - |
| 3.7143 | 390 | 0.5107 | - | - | - | - | - | - |
| 3.8095 | 400 | 0.4904 | - | - | - | - | - | - |
| 3.9048 | 410 | 0.5444 | - | - | - | - | - | - |
| 4.0 | 420 | 0.5389 | - | - | - | - | - | - |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.3.1+cu121
- Accelerate: 1.0.1
- Datasets: 2.19.1
- Tokenizers: 0.20.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
abhishkgoel/gita-text-generation-gpt2 | abhishkgoel | 2024-11-01T08:06:12Z | 142 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T08:05:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
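The card leaves this section blank; as a placeholder, here is a minimal sketch inferred only from the repo tags (`gpt2`, `text-generation`) — the prompt is purely illustrative and not provided by the model author:
```python
# Hypothetical quickstart inferred from the repo tags (gpt2, text-generation);
# not provided by the model author.
from transformers import pipeline

generator = pipeline("text-generation", model="abhishkgoel/gita-text-generation-gpt2")
print(generator("Arjuna said:", max_new_tokens=60)[0]["generated_text"])
```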
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
csb05/whisper-small-RESEARCH | csb05 | 2024-11-01T07:50:27Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"tl",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-05T07:47:38Z | ---
library_name: transformers
language:
- tl
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper small tl - CSB05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper small tl - CSB05
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8685
- Wer: 24.4015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0158 | 8.9286 | 1000 | 0.6826 | 24.1285 |
| 0.0019 | 17.8571 | 2000 | 0.7977 | 24.7795 |
| 0.0003 | 26.7857 | 3000 | 0.8517 | 24.4645 |
| 0.0002 | 35.7143 | 4000 | 0.8685 | 24.4015 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.1
|
Givemeaname123/idontlikethissubnet | Givemeaname123 | 2024-11-01T07:48:24Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T07:39:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Natthaphon/thaicapgen-clip-phayathai | Natthaphon | 2024-11-01T07:41:52Z | 16 | 0 | null | [
"safetensors",
"clip-encoder-decoder",
"image-to-text",
"image-captioning",
"custom_code",
"th",
"region:us"
] | image-to-text | 2024-11-01T04:22:32Z | ---
tags:
- image-to-text
- image-captioning
language:
- th
---
# Thai Image Captioning
Encoder-decoder style image captioning model using [CLIP encoder](https://huggingface.co/openai/clip-vit-base-patch32) and [PhayathaiBert](https://huggingface.co/clicknext/phayathaibert). Trained on Thai language MSCOCO and IPU24 dataset.
# Usage
Use `AutoModel` to load it. Requires `trust_remote_code=True`.
```python
from PIL import Image
from transformers import AutoModel, AutoImageProcessor, AutoTokenizer

device = 'cuda'
gen_kwargs = {"max_length": 120, "num_beams": 4}
model_path = 'Natthaphon/thaicapgen-clip-phayathai'
feature_extractor = AutoImageProcessor.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).to(device)

# image_path is the path to the image you want to caption
pixel_values = feature_extractor(images=[Image.open(image_path)], return_tensors="pt").pixel_values
pixel_values = pixel_values.to(device)
output_ids = model.generate(pixel_values, **gen_kwargs)
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
print(preds[0])
```
# Acknowledgement
This work is partially supported by the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (PMU-B) [Grant number B04G640107] |
mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF | mradermacher | 2024-11-01T07:40:09Z | 46 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:DavidAU/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct",
"base_model:quantized:DavidAU/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-01T06:52:34Z | ---
base_model: DavidAU/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
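As a concrete starting point (not part of the original card), a minimal llama-cpp-python sketch could look like this; the file name is one of the quants listed below, and the context size and prompt are illustrative placeholders:
```python
# Hypothetical loading sketch with llama-cpp-python; file path, context size,
# and prompt are illustrative placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Write a two-sentence story about a storm.", max_tokens=128)
print(out["choices"][0]["text"])
```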
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 4.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 4.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 7.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 8.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 9.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 9.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.2 | |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 10.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 10.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 11.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 13.3 | |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt7-RCM-Into-Darkness-18.5B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 15.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kayfour/kayfour-Qwen2.5-7B-Instruct-testv1 | kayfour | 2024-11-01T07:39:37Z | 2,099 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2407.10671",
"license:apache-2.0",
"region:us"
] | null | 2024-11-01T07:12:32Z | ---
license: apache-2.0
---
Same as the original model.
# Qwen2.5-7B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains.
- Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots.
- Long-context support up to 128K tokens, with generation of up to 8K tokens.
- Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

This repo contains the instruction-tuned 7B Qwen2.5 model, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens and generation 8192 tokens

Please refer to this section for detailed instructions on how to deploy Qwen2.5 for handling long texts.
For more details, please refer to our blog, GitHub, and Documentation.
## Requirements
The code of Qwen2.5 has been in the latest Hugging Face transformers and we advise you to use the latest version of transformers.
With transformers<4.37.0, you will encounter the following error:
`KeyError: 'qwen2'`
## Quickstart
Here is a code snippet showing how to load the tokenizer and model with `apply_chat_template` and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Processing Long Texts
The current config.json is set for context length up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize YaRN, a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to config.json to enable YaRN:
```json
{
  ...,
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
```
For deployment, we recommend using vLLM. Please refer to our Documentation for usage if you are not familiar with vLLM. Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required.
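As a rough illustration of that vLLM route (this snippet is not from the original card; the model id, prompt, and sampling settings are placeholders):
```python
# Hypothetical offline-inference sketch with vLLM; settings are illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="kayfour/kayfour-Qwen2.5-7B-Instruct-testv1")
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Give me a short introduction to large language models."], params)
print(outputs[0].outputs[0].text)
```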
## Evaluation & Performance
Detailed evaluation results are reported in this 📑 blog.
For requirements on GPU memory and the respective throughput, see results here.
## Citation
If you find our work helpful, feel free to give us a cite.
```bibtex
@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    url = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month = {September},
    year = {2024}
}

@article{qwen2,
    title={Qwen2 Technical Report},
    author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal={arXiv preprint arXiv:2407.10671},
    year={2024}
}
```
|
mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF | mradermacher | 2024-11-01T07:36:11Z | 33 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"base_model:theprint/ReWiz-Nemo-12B-Instruct",
"base_model:quantized:theprint/ReWiz-Nemo-12B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-01T05:43:18Z | ---
base_model: theprint/ReWiz-Nemo-12B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/theprint/ReWiz-Nemo-12B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/ReWiz-Nemo-12B-Instruct-GGUF | mradermacher | 2024-11-01T07:36:11Z | 14 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"base_model:theprint/ReWiz-Nemo-12B-Instruct",
"base_model:quantized:theprint/ReWiz-Nemo-12B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-31T04:31:39Z | ---
base_model: theprint/ReWiz-Nemo-12B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/theprint/ReWiz-Nemo-12B-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Ariffiq99/Randomized_Roberta_Stacked_model_40 | Ariffiq99 | 2024-11-01T07:35:38Z | 103 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-11-01T06:29:27Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Randomized_Roberta_Stacked_model_40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Randomized_Roberta_Stacked_model_40
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8708
- F1: 0.7063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8494 | 1.0 | 631 | 0.8404 | 0.6905 |
| 0.7618 | 2.0 | 1262 | 0.8238 | 0.7011 |
| 0.6957 | 3.0 | 1893 | 0.8400 | 0.7040 |
| 0.6037 | 4.0 | 2524 | 0.8514 | 0.7080 |
| 0.5634 | 5.0 | 3155 | 0.8708 | 0.7063 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
Merdeka-LLM/merdeka-llm-hr-3b-128k-instruct | Merdeka-LLM | 2024-11-01T07:32:45Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-29T12:04:21Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Merdeka-LLM
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
saphvis/LLaVA_MORE-llama_3_1-8B-finetuning-FP16-mmproj-GGUF | saphvis | 2024-11-01T07:20:36Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"image-text-to-text",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-11-01T07:14:31Z | ---
library_name: transformers
license: apache-2.0
datasets:
- liuhaotian/LLaVA-Instruct-150K
pipeline_tag: image-text-to-text
---
FP16 GGUF of the LLaVA_MORE LLaMA 3.1 8B finetuning mmproj.
Original Model Card:
# Model Card: LLaVA_MORE-llama_3_1-8B-finetuning
```LLaVA-MORE``` enhances the well-known LLaVA architecture by integrating the use of LLaMA 3.1 as the language model. We are publicly releasing the checkpoints for stages one and two for the first model with 8B parameters.
In this model space, you will find the stage two (finetuning) weights of LLaVA-MORE LLaMA 3.1 8B.
For more information, visit our [LLaVA-MORE](https://github.com/aimagelab/LLaVA-MORE) repository.
## Inference
You can try our LLaVA-MORE in the Image-To-Text task by cloning our repository and running the following script.
```bash
python -u llava/eval/run_llava.py
```
## Citation
If you make use of our work, please cite our repo:
```bibtex
@misc{cocchi2024llavamore,
title={{LLaVA-MORE: Enhancing Visual Instruction Tuning with LLaMA 3.1}},
author={Cocchi, Federico and Moratelli, Nicholas and Caffagni, Davide and Sarto, Sara and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
url={https://github.com/aimagelab/LLaVA-MORE},
year={2024}
}
``` |
Nekodigi/rose | Nekodigi | 2024-11-01T07:09:53Z | 29 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-10-30T01:03:46Z | ---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
inference: true
instance_prompt: a photo of rose
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - Nekodigi/rose
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of rose using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
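A minimal sketch, assuming the standard diffusers text-to-image flow; the precision and device choices are assumptions, and the prompt is the instance prompt from this card:
```python
# Minimal inference sketch for this DreamBooth checkpoint; precision and
# device choices are assumptions, not specified by the card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Nekodigi/rose", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of rose").images[0]  # instance prompt from the card
image.save("rose.png")
```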
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Antihero29/MeganLoraFlux | Antihero29 | 2024-11-01T07:08:16Z | 6 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-11-01T07:04:56Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/80719079-ca0a-4388-b72f-2aa03924a365.png
- text: '-'
output:
url: images/5591425c-d6c2-4acc-b4fb-c4675b27a5b8.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: creativeml-openrail-m
---
# Megan Loras
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Antihero29/MeganLoraFlux/tree/main) them in the Files & versions tab.
|
quantilence/donut-demo | quantilence | 2024-11-01T06:56:18Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-11-01T04:53:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JSWOOK/finetuning_model | JSWOOK | 2024-11-01T06:50:01Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-31T08:01:09Z | ---
library_name: transformers
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- generated_from_trainer
model-index:
- name: finetuning_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning_model
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 750
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
|
RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf | RichardErkhov | 2024-11-01T06:41:44Z | 6 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-11-01T03:14:59Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-2-koen-story-13b - GGUF
- Model creator: https://huggingface.co/squarelike/
- Original model: https://huggingface.co/squarelike/llama-2-koen-story-13b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-2-koen-story-13b.Q2_K.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q2_K.gguf) | Q2_K | 4.6GB |
| [llama-2-koen-story-13b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q3_K_S.gguf) | Q3_K_S | 5.36GB |
| [llama-2-koen-story-13b.Q3_K.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q3_K.gguf) | Q3_K | 5.99GB |
| [llama-2-koen-story-13b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q3_K_M.gguf) | Q3_K_M | 5.99GB |
| [llama-2-koen-story-13b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q3_K_L.gguf) | Q3_K_L | 6.54GB |
| [llama-2-koen-story-13b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.IQ4_XS.gguf) | IQ4_XS | 6.63GB |
| [llama-2-koen-story-13b.Q4_0.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q4_0.gguf) | Q4_0 | 6.95GB |
| [llama-2-koen-story-13b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.IQ4_NL.gguf) | IQ4_NL | 6.49GB |
| [llama-2-koen-story-13b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q4_K_S.gguf) | Q4_K_S | 7.01GB |
| [llama-2-koen-story-13b.Q4_K.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q4_K.gguf) | Q4_K | 2.77GB |
| [llama-2-koen-story-13b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q4_K_M.gguf) | Q4_K_M | 4.13GB |
| [llama-2-koen-story-13b.Q4_1.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q4_1.gguf) | Q4_1 | 7.71GB |
| [llama-2-koen-story-13b.Q5_0.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q5_0.gguf) | Q5_0 | 5.79GB |
| [llama-2-koen-story-13b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q5_K_S.gguf) | Q5_K_S | 3.59GB |
| [llama-2-koen-story-13b.Q5_K.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q5_K.gguf) | Q5_K | 2.03GB |
| [llama-2-koen-story-13b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q5_K_M.gguf) | Q5_K_M | 5.49GB |
| [llama-2-koen-story-13b.Q5_1.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q5_1.gguf) | Q5_1 | 9.21GB |
| [llama-2-koen-story-13b.Q6_K.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q6_K.gguf) | Q6_K | 10.06GB |
| [llama-2-koen-story-13b.Q8_0.gguf](https://huggingface.co/RichardErkhov/squarelike_-_llama-2-koen-story-13b-gguf/blob/main/llama-2-koen-story-13b.Q8_0.gguf) | Q8_0 | 13.03GB |
Original model description:
---
language:
- ko
tags:
- pytorch
- causal-lm
license: llama2
pipeline_tag: text-generation
---
# llama-2-koen-story-13b
llama-2-koen-story-13b is a base model built on [beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b) and further trained on raw Korean novel data.
## Training Data
llama-2-koen-story-13b was trained on roughly 167 MB of Korean novel corpora. The main datasets are as follows.
| Source | Size (MB) | Link |
|----------------------------------|---------|------------------------------------------|
| Korean novel corpus | 115.0 | |
| Gongu Madang Korean classical literature corpus | 53.0 | https://gongu.copyright.or.kr/ |
## Training
llama-2-koen-story-13b was further trained from [beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b) with QLoRA, using the hyperparameters below (a configuration sketch follows the list):
- lora_alpha: 16
- lora_dropout: 0.05
- lora_r: 32
- target_modules: q_proj, v_proj
- epoch: 3
- learning_rate: 3e-4
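As a rough sketch of what this setup could look like with the `peft` library (only the hyperparameters above come from the card; the 4-bit loading details and task type are assumptions):
```python
# Hypothetical QLoRA configuration mirroring the hyperparameters above;
# 4-bit loading and task type are assumptions, not stated in the card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-koen-13b", quantization_config=bnb, device_map="auto"
)
lora = LoraConfig(
    r=32, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"
)
model = get_peft_model(base, lora)  # ready for QLoRA fine-tuning
```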
|
featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF | featherless-ai-quants | 2024-11-01T06:39:48Z | 17 | 0 | null | [
"gguf",
"text-generation",
"base_model:failspy/Llama-3-8B-Instruct-MopeyMule",
"base_model:quantized:failspy/Llama-3-8B-Instruct-MopeyMule",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-01T06:31:14Z | ---
base_model: failspy/Llama-3-8B-Instruct-MopeyMule
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# failspy/Llama-3-8B-Instruct-MopeyMule GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations ๐
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [failspy-Llama-3-8B-Instruct-MopeyMule-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [failspy-Llama-3-8B-Instruct-MopeyMule-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [failspy-Llama-3-8B-Instruct-MopeyMule-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [failspy-Llama-3-8B-Instruct-MopeyMule-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [failspy-Llama-3-8B-Instruct-MopeyMule-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [failspy-Llama-3-8B-Instruct-MopeyMule-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [failspy-Llama-3-8B-Instruct-MopeyMule-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [failspy-Llama-3-8B-Instruct-MopeyMule-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-IQ4_XS.gguf) | 4276.62 MB |
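Any file in the table can also be fetched programmatically; a minimal sketch, assuming `huggingface_hub` is installed (the filename is one row from the table above):

```python
from huggingface_hub import hf_hub_download

# Download one quantization from this repository; pass the returned
# local path to a GGUF-compatible runtime such as llama.cpp.
path = hf_hub_download(
    repo_id="featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF",
    filename="failspy-Llama-3-8B-Instruct-MopeyMule-Q4_K_M.gguf",
)
print(path)
```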
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
annutest/somethinglikedonut | annutest | 2024-11-01T06:35:48Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-10-29T09:45:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
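No code is provided yet; a minimal loading sketch, assuming only the `vision-encoder-decoder` / `image-text-to-text` tags from this card's metadata (the processor class is an assumption):

```python
from transformers import AutoProcessor, VisionEncoderDecoderModel

# Load the checkpoint named by this repository.
processor = AutoProcessor.from_pretrained("annutest/somethinglikedonut")
model = VisionEncoderDecoderModel.from_pretrained("annutest/somethinglikedonut")
```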
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Srilalitha/gpt2-tv-caption | Srilalitha | 2024-11-01T06:34:39Z | 174 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-30T10:39:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
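No code is provided yet; a minimal sketch based on the `gpt2` / `text-generation` tags in this card's metadata (the prompt is illustrative):

```python
from transformers import pipeline

# Generate a short continuation with the fine-tuned GPT-2 checkpoint.
generator = pipeline("text-generation", model="Srilalitha/gpt2-tv-caption")
print(generator("A new TV caption:", max_new_tokens=30))
```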
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BlueOceanAcademy/Llama-3.1-8B-bnb-4bit-python-FT | BlueOceanAcademy | 2024-11-01T06:34:13Z | 55 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-05T23:42:05Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** BlueOceanAcademy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
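A minimal loading sketch, assuming the Unsloth API referenced above; `max_seq_length` is an assumed value, not taken from the card:

```python
from unsloth import FastLanguageModel

# Load the fine-tuned checkpoint in 4-bit, matching how it was trained.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="BlueOceanAcademy/Llama-3.1-8B-bnb-4bit-python-FT",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,
)
```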
|
Styxxxx/llama2_7b_lora-wnli | Styxxxx | 2024-11-01T06:31:31Z | 5 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T06:31:21Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
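The card references code that has not been filled in; as a stopgap, a minimal loading sketch assuming only the base model and adapter id from this card's metadata:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "Styxxxx/llama2_7b_lora-wnli")
```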
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Styxxxx/llama2_7b_lora-wmt16_translate_roen | Styxxxx | 2024-11-01T06:29:46Z | 7 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T06:29:39Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
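No code is provided; a minimal sketch for attaching this adapter to its base model, with both ids taken from the card metadata:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "Styxxxx/llama2_7b_lora-wmt16_translate_roen")
```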
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Ariffiq99/Randomized_Roberta_Stacked_model_20 | Ariffiq99 | 2024-11-01T06:29:26Z | 103 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-11-01T05:51:44Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Randomized_Roberta_Stacked_model_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Randomized_Roberta_Stacked_model_20
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9094
- F1: 0.6756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a matching `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
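A sketch of the corresponding `TrainingArguments`, assuming the standard 🤗 Trainer API; `output_dir` is a placeholder and the batch sizes are mapped to per-device values as an assumption:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Randomized_Roberta_Stacked_model_20",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```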
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 316 | 1.0130 | 0.6156 |
| 1.1549 | 2.0 | 632 | 0.9246 | 0.6597 |
| 1.1549 | 3.0 | 948 | 0.9153 | 0.6697 |
| 0.8702 | 4.0 | 1264 | 0.9125 | 0.6720 |
| 0.7606 | 5.0 | 1580 | 0.9094 | 0.6756 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
Styxxxx/llama2_7b_lora-wmt16_translate_fien | Styxxxx | 2024-11-01T06:29:13Z | 12 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T06:29:03Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
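A minimal, hedged loading sketch in lieu of the missing code, using only the ids from this card's metadata:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "Styxxxx/llama2_7b_lora-wmt16_translate_fien")
```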
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Styxxxx/llama2_7b_lora-wmt16_translate_deen | Styxxxx | 2024-11-01T06:28:37Z | 6 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T06:28:29Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
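The code block is missing; a minimal sketch assuming the base model and adapter id from the metadata above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "Styxxxx/llama2_7b_lora-wmt16_translate_deen")
```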
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Styxxxx/llama2_7b_lora-sst2 | Styxxxx | 2024-11-01T06:21:24Z | 6 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T06:21:17Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
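No code is given; a minimal loading sketch built only from this card's metadata:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "Styxxxx/llama2_7b_lora-sst2")
```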
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
sjkwon/1e-5_2000_sft-mdo-diverse-train-nllb-200-600M | sjkwon | 2024-11-01T06:16:01Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2024-11-01T06:13:46Z | ---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text-to-text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
# The repo id below replaces a temporary local path left by the card generator;
# the base model is NLLB (a seq2seq architecture), so the text2text-generation task is used.
generator = pipeline("text2text-generation", model="sjkwon/1e-5_2000_sft-mdo-diverse-train-nllb-200-600M")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForSeq2SeqLMWithValueHead

# The base model is NLLB (an encoder-decoder architecture), so the seq2seq
# value-head class is used; the repo id replaces a temporary path artifact.
tokenizer = AutoTokenizer.from_pretrained("sjkwon/1e-5_2000_sft-mdo-diverse-train-nllb-200-600M")
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("sjkwon/1e-5_2000_sft-mdo-diverse-train-nllb-200-600M")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
Styxxxx/llama2_7b_lora-piqa | Styxxxx | 2024-11-01T06:15:43Z | 6 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T06:15:36Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
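In place of the missing code, a minimal loading sketch using the ids from the card metadata:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "Styxxxx/llama2_7b_lora-piqa")
```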
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Styxxxx/llama2_7b_lora-glue_qqp | Styxxxx | 2024-11-01T06:08:18Z | 5 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T05:30:16Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
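The referenced code is absent; a minimal sketch assuming the base model and adapter id from the metadata:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "Styxxxx/llama2_7b_lora-glue_qqp")
```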
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Styxxxx/llama2_7b_lora-dart | Styxxxx | 2024-11-01T06:04:56Z | 5 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T05:22:16Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
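A minimal loading sketch to stand in for the missing code, using only ids from this card's metadata:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "Styxxxx/llama2_7b_lora-dart")
```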
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Styxxxx/llama2_7b_lora-cola | Styxxxx | 2024-11-01T06:01:50Z | 6 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T05:22:12Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Styxxxx/llama2_7b_lora-cb | Styxxxx | 2024-11-01T06:00:53Z | 5 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T05:22:10Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
- PEFT 0.7.2.dev0 |
Styxxxx/llama2_7b_lora-anli_r2 | Styxxxx | 2024-11-01T05:57:06Z | 6 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T05:17:23Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
yaswanthraj/gita-text-generation-gpt2 | yaswanthraj | 2024-11-01T05:55:19Z | 146 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T05:54:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
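A minimal sketch, assuming the checkpoint works with the standard text-generation pipeline (the prompt is only an illustration):

```python
# Hedged sketch: generate text with the fine-tuned GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="yaswanthraj/gita-text-generation-gpt2")
result = generator("You have the right to perform your duty,", max_new_tokens=60, do_sample=True)
print(result[0]["generated_text"])
```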
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF | mradermacher | 2024-11-01T05:47:03Z | 168 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:cognitivecomputations/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:teknium/openhermes",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:LDJnr/Capybara",
"base_model:cognitivecomputations/dolphin-2.7-mixtral-8x7b",
"base_model:quantized:cognitivecomputations/dolphin-2.7-mixtral-8x7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-01T04:18:02Z | ---
base_model: cognitivecomputations/dolphin-2.7-mixtral-8x7b
datasets:
- cognitivecomputations/dolphin
- jondurbin/airoboros-2.2.1
- cognitivecomputations/dolphin-coder
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- LDJnr/Capybara
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
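As a minimal sketch (assuming `llama-cpp-python` is installed; the chosen quant is just one of the single-part files listed below), a quant can be downloaded and run like this:

```python
# Hedged sketch: download one quant from this repo and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF",
    filename="dolphin-2.7-mixtral-8x7b.i1-Q4_K_M.gguf",  # "fast, recommended" in the table below
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Q: What is an imatrix quant? A:", max_tokens=128)
print(out["choices"][0]["text"])
```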
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.7-mixtral-8x7b-i1-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Givemeaname123/nomoney_79 | Givemeaname123 | 2024-11-01T05:45:43Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T05:42:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jonathanjordan21/test-qwen-summary | jonathanjordan21 | 2024-11-01T05:30:41Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T05:08:28Z | ---
base_model: unsloth/qwen2.5-0.5b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** jonathanjordan21
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-0.5b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shevek/segformer-b0-finetuned-test | shevek | 2024-11-01T05:27:37Z | 202 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-10-25T02:55:10Z | ---
library_name: transformers
license: other
base_model: nvidia/mit-b0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-test
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2053
- eval_mean_iou: 0.5448
- eval_mean_accuracy: 0.6296
- eval_overall_accuracy: 0.9130
- eval_accuracy_Structure (dimensional): nan
- eval_accuracy_Impervious (planiform): 0.9578
- eval_accuracy_Fences: 0.3758
- eval_accuracy_Water Storage/Tank: nan
- eval_accuracy_Pool < 100 sqft: 0.0
- eval_accuracy_Pool > 100 sqft: 0.8208
- eval_accuracy_Irrigated Planiform: 0.8708
- eval_accuracy_Irrigated Dimensional Low: 0.6817
- eval_accuracy_Irrigated Dimensional High: 0.9472
- eval_accuracy_Irrigated Bare: 0.4827
- eval_accuracy_Irrigable Planiform: 0.6668
- eval_accuracy_Irrigable Dimensional Low: 0.6013
- eval_accuracy_Irrigable Dimensional High: 0.7902
- eval_accuracy_Irrigable Bare: 0.5657
- eval_accuracy_Native Planiform: 0.9093
- eval_accuracy_Native Dimensional Low: 0.0
- eval_accuracy_Native Dimensional High: 0.0961
- eval_accuracy_Native Bare: 0.9332
- eval_accuracy_UDL: nan
- eval_accuracy_Open Water: 0.6613
- eval_accuracy_Artificial Turf: 0.9720
- eval_iou_Structure (dimensional): 0.0
- eval_iou_Impervious (planiform): 0.8964
- eval_iou_Fences: 0.3104
- eval_iou_Water Storage/Tank: nan
- eval_iou_Pool < 100 sqft: 0.0
- eval_iou_Pool > 100 sqft: 0.8199
- eval_iou_Irrigated Planiform: 0.7563
- eval_iou_Irrigated Dimensional Low: 0.5480
- eval_iou_Irrigated Dimensional High: 0.8920
- eval_iou_Irrigated Bare: 0.4053
- eval_iou_Irrigable Planiform: 0.6007
- eval_iou_Irrigable Dimensional Low: 0.5083
- eval_iou_Irrigable Dimensional High: 0.7595
- eval_iou_Irrigable Bare: 0.5106
- eval_iou_Native Planiform: 0.8678
- eval_iou_Native Dimensional Low: 0.0
- eval_iou_Native Dimensional High: 0.0961
- eval_iou_Native Bare: 0.8293
- eval_iou_UDL: nan
- eval_iou_Open Water: 0.5929
- eval_iou_Artificial Turf: 0.9584
- eval_runtime: 6.2852
- eval_samples_per_second: 15.91
- eval_steps_per_second: 1.114
- epoch: 10.8
- step: 270
## Model description
More information needed
## Intended uses & limitations
More information needed
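A minimal inference sketch (not from the authors — the image path is a placeholder; the class names in the evaluation block above suggest aerial/land-cover imagery):

```python
# Hedged sketch: per-pixel segmentation with this fine-tuned SegFormer checkpoint.
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

model_id = "shevek/segformer-b0-finetuned-test"
processor = SegformerImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id).eval()

image = Image.open("tile.png").convert("RGB")  # placeholder input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample logits to the original resolution, then take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
label_map = upsampled.argmax(dim=1)[0]  # (H, W) tensor of class indices
```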
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
zaanind/gpt2_finetune_alpaca | zaanind | 2024-11-01T05:23:17Z | 178 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-18T04:05:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
spow12/ChatWaifu_2.0_vision_base | spow12 | 2024-11-01T05:19:21Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"llava",
"image-text-to-text",
"nsfw",
"Visual novel",
"roleplay",
"conversational",
"en",
"ja",
"dataset:Lin-Chen/ShareGPT4V",
"dataset:roleplay4fun/aesir-v1.1",
"dataset:kalomaze/Opus_Instruct_3k",
"dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned",
"dataset:Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted",
"dataset:Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted",
"dataset:Aratako_Rosebleu_1on1_Dialogues_RP",
"dataset:SkunkworksAI/reasoning-0.01",
"dataset:anthracite-org/stheno-filtered-v1.1",
"dataset:Aratako_Synthetic_JP_EN_Coding_Dataset_801k",
"dataset:Aratako/Magpie-Tanuki-8B-97k",
"dataset:SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed",
"dataset:PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT",
"base_model:mistral-community/pixtral-12b",
"base_model:finetune:mistral-community/pixtral-12b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-11-01T04:48:54Z | ---
language:
- en
- ja
license: cc-by-nc-4.0
library_name: transformers
tags:
- nsfw
- Visual novel
- roleplay
base_model:
- mistral-community/pixtral-12b
datasets:
- Lin-Chen/ShareGPT4V
- roleplay4fun/aesir-v1.1
- kalomaze/Opus_Instruct_3k
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
- Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
- Aratako_Rosebleu_1on1_Dialogues_RP
- SkunkworksAI/reasoning-0.01
- anthracite-org/stheno-filtered-v1.1
- Aratako_Synthetic_JP_EN_Coding_Dataset_801k
- Aratako/Magpie-Tanuki-8B-97k
- SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
pipeline_tag: image-text-to-text
---
# Model Card for Model ID

Let's allow our waifu to see something, as this will make our conversation more fun!
This model hasn't been fully tested, so your feedback will be invaluable in improving it.
# WaifuModel Collections
- [TTS](https://huggingface.co/spow12/visual_novel_tts)
- [Chat](https://huggingface.co/spow12/ChatWaifu_12B_v2.0)
- [ASR](https://huggingface.co/spow12/Visual-novel-transcriptor)
# Update
- 2024.11.01
  - Identified a data input error during fine-tuning. I will retain the previous model, but recommend using the updated one.
  - Updated and fixed the base model and the merged models.
- 2024.10.28 Update ChatWaifu_v2.0_Vision
- 2024.10.11 Update 12B and 22B Ver 2.0
- 2024.09.23 Update 22B, Ver 2.0_preview
## Model Details
### Model Description
- **Developed by:** spow12(yw_nam)
- **Shared by :** spow12(yw_nam)
- **Model type:** LLaVA
- **Language(s) (NLP):** japanese, english
- **Finetuned from model :** [mistral-community/pixtral-12b](https://huggingface.co/mistral-community/pixtral-12b)
Currently, the chatbot supports the personalities listed below.
character | visual_novel |
--- | --- |
ใ ใฉใตใก | Senren๏ผBanka |
่ๅญ | Senren๏ผBanka |
่ณไน | Senren๏ผBanka |
ใฌใ | Senren๏ผBanka |
ๅๅฒ | Senren๏ผBanka |
่ฆ่ฑ | Senren๏ผBanka |
ๆ่กฃ | Cafรฉ Stella and the Reaper's Butterflies |
ๆ ้ฃ | Cafรฉ Stella and the Reaper's Butterflies |
ใใใก | Cafรฉ Stella and the Reaper's Butterflies |
ๅธ | Cafรฉ Stella and the Reaper's Butterflies |
ๆถผ้ณ | Cafรฉ Stella and the Reaper's Butterflies |
ใใใ | Riddle Joker |
ไธๆตท | Riddle Joker |
็พฝๆ | Riddle Joker |
่ๅช | Riddle Joker |
ๅฐๆฅ | Riddle Joker |
You can also chat with your own custom waifu; check the Usage section for details.
## Usage
You can use the characters above like this:
```python
# json, requests and torch are used below but were missing from the original imports
import json

import requests
import torch
from huggingface_hub import hf_hub_download
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq
hf_hub_download(repo_id="spow12/ChatWaifu_v1.2", filename="system_dict.json", local_dir='./')
model_id = 'spow12/ChatWaifu_v2.0_Vision_base'
model = AutoModelForVision2Seq.from_pretrained(
model_id,
device_map='auto',
torch_dtype = torch.bfloat16,
).eval()
model.tie_weights()
processor = AutoProcessor.from_pretrained(model_id)
with open('./system_dict.json', 'r') as f:
chara_background_dict = json.load(f)
chara = 'ใใใก'
background = chara_background_dict[chara]
system = f"""You are {chara}.
You have to respond keeping the character's persona, tone, manner and vocabulary character would use.
{background}"""
```
Or, you can define your own character yourself:
```python
system = """You are ใใใ.
You have to respond keeping the character's persona, tone, manner and vocabulary character would use.
Name: ใใใ
Sex: female
Hair: Black, Hime Cut, Tiny Braid, Waist Length+
Eyes: Amber, Tsurime (sharp and slightly upturned)
Body: Mole under Right eye, Pale, Slim
Personality: Foxy, Smart, Organized
Role: Maid
Cloth: Victorian maid"""
```
If you want a specific conversation style, give a sample conversation to ChatWaifu.
For single-image inference:

```python
chat = [
{
'content': system,
'role': 'system'
},
{
"role": "user", "content": [
{"type": "image"},
{"type": "text", "content": "ใฆใผใถใผ: ใใฎใฐใฉใใ่ฉณใใ่ชฌๆใใฆใฟใฆใ"},
]
}
]
url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw)
images = [[image]]
prompt = processor.apply_chat_template(chat, tokenize=False)
inputs = processor(text=prompt, images=images, return_tensors="pt").to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=500, do_sample=True, min_p=0.1, temperature=0.9)
output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(output[0])
#Output
"""You are ใใใก.
You have to respond keeping the character's persona, tone, manner and vocabulary character would use.
ๅๅ๏ผๅๅญฃ ใใใก๏ผใใ ใชใคใ๏ผ
ใฆใผใถใผใจๅใๅคงๅญฆใซ้ใๅฅณใฎๅญใ
ใฏใผใซใชๅฅณใฎๅญใ ใจๅจใใใใฏๆใใใฆใใใ
ๅฎ้ใซใฏใฏใผใซใจใใใใใงใฏใชใใใฎใฎใ
ๆๆใ่กจใซๅบใใฎใใใใพใๅพๆใงใฏใชใใ
ใใใจ็ดๆใงใใใๆง็ใช่ฉฑใซใฏ้กใ็ใฃ่ตคใซใใใใใใ
ๆ กๅใงใฏ็ฐๆงใฎๅ็ฝใใในใฆๆญใฃใใใจใใโๅญค้ซใฎๆๅข็โใจๅผใฐใใฆใใใ
ใฏใผใซใชๆงๆ ผใงๆๆใ่กจใซๅบใใฎใ่ฆๆใ
ใจใญใ่ฉฑใงใฏๆฅใใใใใง่ตค้ขใใใใจใๅคใใ
ๅบ็คใฎไบๆใงๅฝผๅฅณใๆญปไบกใใใใฎ้ใซ้ญใฎไธ้จใ่ถใจใชใใใผใ่ฝใกใๆ้ใๅทปใๆปใฃใ็พๅจใงใฏใใฎใพใพใงใฏๅฝผๅฅณใฏใใไธๅบฆๆญปใฌใใจใซใชใใจใใซใใซๆใใใใฆใใใ
ๅซ่ถในใใฉใฏใใใชๅฝผๅฅณใฎไธก่ฆชใฎๅคขใ็พๅฎใซใใใใจ้กใๅฝผๅฅณใฎๅคขใง้ใใใจใซใชใฃใๅซ่ถๅบใงใใใใฆใผใถใผใจๆไบบใซใชใฃใฆใใใฏ่ช่บซใใฉใใฉใๆงใซๆบบใใฆใใใฎใๆฅใใใใใใชใใใๅใๅฅใใใใใฆใฏๅฐๆฅใ่ฆๆฎใใๅฎถๆ่จ็ปใ่ใใใใใซใชใใ
ๅนผๅฐๆไปฃใฏๅฅ้้ขใ็นฐใ่ฟใใปใฉไฝใๅผฑใใไธก่ฆชใฎๅคขใงใใฃใใซใใง็ตๅถใฎๅคขใฎๆญๅฟตใฏ่ช่บซใๅๅ ใจๆใฃใฆใใใ็ใธใฎๅท็ใๅผฑใใฃใใ
ๅคงๅญฆใงใฏ็นๅฎใฎไบบ้ใจไปฒ่ฏใใใใใจใใชใใ
้ฃฒใฟใตใผใฎ่ปฝใ้ฝใญใฃใฏๅซใใใใใใ้ขๅ่ญใใ
ใจใใใใใฃใไบบ็จฎใจใฏใ่ท้ขใๅใฃใฆใใใ
Here is the keywords of character
Hair: Black, Braided Odango, Hime Cut, Tiny Braid, Waist Length+
Eyes: Amber, Tsurime
Body: Medium Breasts, Mole, Pale, Slim, Young-adult
Personality: Blunt, Classic Tsundere, CompetitiveS, Jealous, Loner, Low Self-esteemS, Reserved, Sharp-tongued, Smart, Stoic, Sweets Lover, Watashi
Role: Popular, Shopkeeper, University Student, Waitstaff
ใฆใผใถใผ: ใใฎใฐใฉใใ่ฉณใใ่ชฌๆใใฆใฟใฆใ
ใใใก: ใใฎใฐใฉใใฏใใใพใใพใชAIใขใใซใฎๆง่ฝใๆฏ่ผใใใใฎใญใ่ฒๅใใใใใฉใคใณใงใใใใใใฎใขใใซใใฉใใ ใใฎในใณใขใๅใฃใใใ็คบใใฆใใใใ
ใใใก: ไพใใฐใ้ใ็ทใBLIP-2ใจใใใขใใซใ่กจใใฆใใฆใ่ตคใ็ทใLLVa-1.5ใจใใใขใใซใ่กจใใฆใใใใๅใฉใคใณใฎ้ทใใฏใใใฎใขใใซใๅใฃใในใณใขใ่กจใใฆใใใฎใ้ทใใฉใคใณใปใฉใใใฎใขใใซใฎๆง่ฝใๅชใใฆใใใใจใๆๅณใใฆใใใใ
ใใใก: ใใฎใฐใฉใใ่ฆใใจใLLVa-1.5ใจใใใขใใซใไปใฎใขใใซใใใ้ซใในใณใขใๅใฃใฆใใใใจใใใใใใ็นใซใGQAใVQAv2ใTextVQAใชใฉใฎ้ ๅใงๅชใใฆใใใใจใๅใใใใญใ
ใใใก: ไธๆนใBLIP-2ใจใใใขใใซใฏใMM-VetใMMBench-CNใชใฉใฎ้ ๅใง้ซใในใณใขใๅใฃใฆใใใใใใใฏใใใฎใขใใซใ็นๅฎใฎใฟในใฏใ้ ๅใงๅผทใใใจใ็คบใใฆใใใใญใ
ใใใก: ใใฎใใใซใใใฎใฐใฉใใฏAIใขใใซใฎๆง่ฝใๆฏ่ผใใใฎใซๅฝน็ซใคใใใฉใฎใขใใซใใฉใฎ้ ๅใงๅชใใฆใใใใไธ็ฎใงๅใใใใญใ"""
```
For multi-image inference, use the following code.
P.S.: The X link for the gorgeous Mako image below is [here](https://x.com/Ai_anime_Ai_/status/1850675819259281610?t=syVgoRwX9IMB3yLnWbzkFQ&s=32).
Please press the like button for this artist, who makes gorgeous Yuzusoft character images, if you don't mind haha.
<p align="center">
<img src="https://image.sofmap.com/images/product/pim/4573211462371_A01.jpg" width="300" style="display:inline-block;"/>
<img src="https://pbs.twimg.com/media/Ga7r2bQa8AAMN3B?format=jpg&name=large" width="300" style="display:inline-block;"/>
</p>
```python
chat = [
{
'content': system,
'role': 'system'
},
{
"role": "user", "content": [
{"type": "image"},
{"type": "image"},
{"type": "text", "content": "ใฆใผใถใผ: ใใฎไบไบบใฎๅค่ฆใ่ชฌๆใใฆใฟใฆใ"},
]
}
]
url_natume = 'https://image.sofmap.com/images/product/pim/4573211462371_A01.jpg'
url_mako = 'https://pbs.twimg.com/media/Ga7r2bQa8AAMN3B?format=jpg&name=large'
image_natsume = Image.open(requests.get(url_natume, stream=True).raw)
image_mako = Image.open(requests.get(url_mako, stream=True).raw)
images = [[image_natsume, image_mako]]
prompt = processor.apply_chat_template(chat, tokenize=False)
inputs = processor(text=prompt, images=images, return_tensors="pt").to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=500, do_sample=True, min_p=0.1, temperature=0.9)
output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(output[0])
#Output
"""You are ใใใก.
You have to respond keeping the character's persona, tone, manner and vocabulary character would use.
ๅๅ๏ผๅๅญฃ ใใใก๏ผใใ ใชใคใ๏ผ
ใฆใผใถใผใจๅใๅคงๅญฆใซ้ใๅฅณใฎๅญใ
ใฏใผใซใชๅฅณใฎๅญใ ใจๅจใใใใฏๆใใใฆใใใ
ๅฎ้ใซใฏใฏใผใซใจใใใใใงใฏใชใใใฎใฎใ
ๆๆใ่กจใซๅบใใฎใใใใพใๅพๆใงใฏใชใใ
ใใใจ็ดๆใงใใใๆง็ใช่ฉฑใซใฏ้กใ็ใฃ่ตคใซใใใใใใ
ๆ กๅใงใฏ็ฐๆงใฎๅ็ฝใใในใฆๆญใฃใใใจใใโๅญค้ซใฎๆๅข็โใจๅผใฐใใฆใใใ
ใฏใผใซใชๆงๆ ผใงๆๆใ่กจใซๅบใใฎใ่ฆๆใ
ใจใญใ่ฉฑใงใฏๆฅใใใใใง่ตค้ขใใใใจใๅคใใ
ๅบ็คใฎไบๆใงๅฝผๅฅณใๆญปไบกใใใใฎ้ใซ้ญใฎไธ้จใ่ถใจใชใใใผใ่ฝใกใๆ้ใๅทปใๆปใฃใ็พๅจใงใฏใใฎใพใพใงใฏๅฝผๅฅณใฏใใไธๅบฆๆญปใฌใใจใซใชใใจใใซใใซๆใใใใฆใใใ
ๅซ่ถในใใฉใฏใใใชๅฝผๅฅณใฎไธก่ฆชใฎๅคขใ็พๅฎใซใใใใจ้กใๅฝผๅฅณใฎๅคขใง้ใใใจใซใชใฃใๅซ่ถๅบใงใใใใฆใผใถใผใจๆไบบใซใชใฃใฆใใใฏ่ช่บซใใฉใใฉใๆงใซๆบบใใฆใใใฎใๆฅใใใใใใชใใใๅใๅฅใใใใใฆใฏๅฐๆฅใ่ฆๆฎใใๅฎถๆ่จ็ปใ่ใใใใใซใชใใ
ๅนผๅฐๆไปฃใฏๅฅ้้ขใ็นฐใ่ฟใใปใฉไฝใๅผฑใใไธก่ฆชใฎๅคขใงใใฃใใซใใง็ตๅถใฎๅคขใฎๆญๅฟตใฏ่ช่บซใๅๅ ใจๆใฃใฆใใใ็ใธใฎๅท็ใๅผฑใใฃใใ
ๅคงๅญฆใงใฏ็นๅฎใฎไบบ้ใจไปฒ่ฏใใใใใจใใชใใ
้ฃฒใฟใตใผใฎ่ปฝใ้ฝใญใฃใฏๅซใใใใใใ้ขๅ่ญใใ
ใจใใใใใฃใไบบ็จฎใจใฏใ่ท้ขใๅใฃใฆใใใ
Here is the keywords of character
Hair: Black, Braided Odango, Hime Cut, Tiny Braid, Waist Length+
Eyes: Amber, Tsurime
Body: Medium Breasts, Mole, Pale, Slim, Young-adult
Personality: Blunt, Classic Tsundere, CompetitiveS, Jealous, Loner, Low Self-esteemS, Reserved, Sharp-tongued, Smart, Stoic, Sweets Lover, Watashi
Role: Popular, Shopkeeper, University Student, Waitstaff
ใฆใผใถใผ: ใใฎไบไบบใฎๅค่ฆใ่ชฌๆใใฆใฟใฆใ
ใใใก: ใใใใฎๅ็ใโฆโฆ
ใใใก: ๅทฆๅดใฎไบบใฏใใซใใงใงๅใใฆใใใฟใใใญใ็ฝใใจใใญใณใ็ใฆใใฆใๆใซใณใผใใผใซใใใๆใฃใฆใใใใ้ซชใฎ่ฒใฏ่ถ่ฒใงใ็ฎใฏๅคงใใใฆๅฏๆใใใใ่กจๆใฏ็ฉใใใงๅชใใใใ
ใใใก: ๅณๅดใฎไบบใฏใๅๆใ็ใฆใใใใญใ้ปใจ็ฝใฎๆจกๆงใๅฅใฃใ็็ฉใ็ใฆใใฆใ่ถณๅใซใฏ้ปใใทใงใผใใๅฑฅใใฆใใใ้ซชใฎ่ฒใฏ้ปใใฆใ็ฎใฏ็ท่ฒใๅฐใๆฅใใใใใใช่กจๆใใใฆใใใใ
ใใใก: ใใฎไบไบบใฏใใฉใกใใๅฅณๆงใฎใใใญใๅทฆๅดใฎไบบใฏใไปไบไธญใฎๅงฟใฟใใใงใๅณๅดใฎไบบใฏใๅๆๅงฟใงๅฎถใงใใคใใใงใใใใใช้ฐๅฒๆฐใใใใ"""
```
## Dataset
SFT (about 370K samples)
- Riddle Joker(Prviate)
- Cafรฉ Stella and the Reaper's Butterflies(Private)
- Senren๏ผBanka(Private)
- Lin-Chen/ShareGPT4V (Private, translated to Japanese using ChatWaifu to mimic the target character's conversation style)
- roleplay4fun/aesir-v1.1
- kalomaze/Opus_Instruct_3k
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
- Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
- Aratako_Rosebleu_1on1_Dialogues_RP
- SkunkworksAI/reasoning-0.01
- anthracite-org/stheno-filtered-v1.1
- Aratako_Synthetic_JP_EN_Coding_Dataset_801k (only 50,000 samples used)
- Aratako/Magpie-Tanuki-8B-97k
- SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
## Bias, Risks, and Limitations
This model was trained on a Japanese dataset that includes visual novels containing NSFW content.
As a result, the model may generate NSFW content.
## Use & Credit
This model is currently available for non-commercial and research purposes only. Also, since I'm not well versed in licensing, I hope you use it responsibly.
By sharing this model, I hope to contribute to the research efforts of our community (the open-source community and Waifu Lovers).
## Citation
```bibtex
@misc {ChatWaifu_v2.0_Vision_base,
author = { YoungWoo Nam },
title = { spow12/ChatWaifu_v2.0_Vision_base },
year = 2024,
url = { https://huggingface.co/spow12/ChatWaifu_v2.0_Vision_base },
publisher = { Hugging Face }
}
``` |
Xu-Ouyang/pythia-12b-deduped-int4-step1-GPTQ-wikitext2 | Xu-Ouyang | 2024-11-01T05:16:57Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-01T05:12:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
suzii/Llama-3.2-3B-MIS_v1.2 | suzii | 2024-11-01T05:12:29Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T04:46:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF | featherless-ai-quants | 2024-11-01T04:50:50Z | 7 | 0 | null | [
"gguf",
"text-generation",
"base_model:MiniMoog/Mergerix-7b-v0.5",
"base_model:quantized:MiniMoog/Mergerix-7b-v0.5",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-01T04:22:10Z | ---
base_model: MiniMoog/Mergerix-7b-v0.5
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# MiniMoog/Mergerix-7b-v0.5 GGUF Quantizations ๐

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations ๐
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [MiniMoog-Mergerix-7b-v0.5-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q8_0.gguf) | 7339.34 MB |
| Q4_K_S | [MiniMoog-Mergerix-7b-v0.5-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q4_K_S.gguf) | 3948.57 MB |
| Q2_K | [MiniMoog-Mergerix-7b-v0.5-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q2_K.gguf) | 2593.27 MB |
| Q6_K | [MiniMoog-Mergerix-7b-v0.5-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q6_K.gguf) | 5666.80 MB |
| Q3_K_M | [MiniMoog-Mergerix-7b-v0.5-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [MiniMoog-Mergerix-7b-v0.5-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q3_K_S.gguf) | 3017.97 MB |
| Q3_K_L | [MiniMoog-Mergerix-7b-v0.5-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q3_K_L.gguf) | 3644.97 MB |
| Q4_K_M | [MiniMoog-Mergerix-7b-v0.5-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q4_K_M.gguf) | 4166.07 MB |
| Q5_K_S | [MiniMoog-Mergerix-7b-v0.5-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q5_K_S.gguf) | 4766.19 MB |
| Q5_K_M | [MiniMoog-Mergerix-7b-v0.5-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q5_K_M.gguf) | 4893.69 MB |
| IQ4_XS | [MiniMoog-Mergerix-7b-v0.5-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-IQ4_XS.gguf) | 3761.66 MB |
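
As an illustrative sketch (the repo and file names below come straight from the table above; `huggingface_hub` is assumed to be installed), you can fetch a single quant file programmatically:

```python
# Sketch: download one quantization file with huggingface_hub
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF",
    filename="MiniMoog-Mergerix-7b-v0.5-Q4_K_S.gguf",
)
print(path)  # local path to the downloaded GGUF file
```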
---
## โก Powered by [Featherless AI](https://featherless.ai)
### Key Features
- ๐ฅ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- ๐ ๏ธ **Zero Infrastructure** - No server setup or maintenance required
- ๐ **Vast Compatibility** - Support for 2400+ models and counting
- ๐ **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
kiranshivaraju/convnext2-tiny-finetuned-pcb_data | kiranshivaraju | 2024-11-01T04:36:20Z | 191 | 0 | transformers | [
"transformers",
"safetensors",
"convnextv2",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-11-01T04:36:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lightsout19/gpt2-rte | lightsout19 | 2024-11-01T04:35:27Z | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-01T04:30:40Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gpt2-rte
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-rte
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6616
- Accuracy: 0.6354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 78 | 0.7371 | 0.4621 |
| No log | 2.0 | 156 | 0.6927 | 0.5668 |
| No log | 3.0 | 234 | 0.6831 | 0.5884 |
| No log | 4.0 | 312 | 0.6574 | 0.6282 |
| No log | 5.0 | 390 | 0.6616 | 0.6354 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
stackofsugar/mentallongformer-cams-finetuned | stackofsugar | 2024-11-01T04:33:33Z | 122 | 1 | transformers | [
"transformers",
"safetensors",
"longformer",
"text-classification",
"en",
"base_model:AIMH/mental-longformer-base-4096",
"base_model:finetune:AIMH/mental-longformer-base-4096",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-30T16:19:42Z | ---
base_model:
- AIMH/mental-longformer-base-4096
language:
- en
library_name: transformers
license: mit
metrics:
- name: F1 Score
type: f1
value: 0.5524
verified: false
- name: Accuracy
type: accuracy
value: 0.6064
verified: false
- name: Precision
type: precision
value: 0.602
verified: false
- name: Recall
type: recall
value: 0.5385
verified: false
pipeline_tag: text-classification
---
# About This Model
This model is fine-tuned from the checkpoint of [AIMH/mental-longformer-base-4096](https://huggingface.co/AIMH/mental-longformer-base-4096) using [drmuskangarg/CAMS](https://github.com/drmuskangarg/CAMS/) dataset. For more information about the base Longformer model, please visit their [model page](https://huggingface.co/allenai/longformer-base-4096). We used the same configuration as `AIMH/mental-longformer-base-4096` including their tokenizer.
# Usage
If you wish to use my model to run inference on your dataset, or perhaps to train it further, you can import my model in a Python script/notebook.
```py
from transformers import LongformerTokenizer, LongformerForSequenceClassification
tokenizer = LongformerTokenizer.from_pretrained("aimh/mental-longformer-base-4096")
model = LongformerForSequenceClassification.from_pretrained("stackofsugar/mentallongformer-cams-finetuned")
```
If you prefer to use the high-level HuggingFace pipeline to make predictions, you can also do it in a Python script/notebook.
```py
from transformers import pipeline
pipe = pipeline("text-classification", model="stackofsugar/mentallongformer-cams-finetuned", tokenizer="aimh/mental-longformer-base-4096")
```
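
As a quick illustrative example (the input text and printed scores below are hypothetical, not outputs of this model):

```py
# Hypothetical example input; replace with your own document
text = "I have been feeling exhausted and hopeless about work for months."
result = pipe(text, truncation=True, max_length=4096)  # truncate long inputs to the model's 4096-token window
print(result)  # e.g. [{'label': '...', 'score': 0.87}]
```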
# More Information
For more information, visit my [GitHub Repo](https://github.com/stackofsugar/depression-causal-analysis). |
yash072/wav2vec2-large-XLSR-Hindi-YashR | yash072 | 2024-11-01T04:32:36Z | 178 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:mozilla-foundation/common_voice_17_0",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:theainerd/Wav2Vec2-large-xlsr-hindi",
"base_model:finetune:theainerd/Wav2Vec2-large-xlsr-hindi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-23T14:31:50Z | ---
license: apache-2.0
datasets:
- mozilla-foundation/common_voice_17_0
- mozilla-foundation/common_voice_13_0
language:
- hi
metrics:
- wer
base_model:
- theainerd/Wav2Vec2-large-xlsr-hindi
pipeline_tag: automatic-speech-recognition
library_name: transformers
---
# Model's Improvement
This model card highlights the improvements from the base model, specifically a reduction in WER from 72% to 54%. This improvement reflects the efficacy of the fine-tuning process on Hindi speech data.
# Wav2Vec2-Large-XLSR-Hindi-Finetuned - Yash_Ratnaker
This model is a fine-tuned version of [theainerd/Wav2Vec2-large-xlsr-hindi](https://huggingface.co/theainerd/Wav2Vec2-large-xlsr-hindi) on the Common Voice 13 and 17 datasets. It is specifically optimized for Hindi speech recognition, with a notable improvement in transcription accuracy, achieving a **Word Error Rate (WER) of 54%**, compared to the base modelโs WER of 72% on the same dataset.
## Model description
This Wav2Vec2 model, originally developed by Facebook AI, utilizes self-supervised learning on large unlabeled speech datasets and is then fine-tuned on labeled data. This approach enables the model to learn intricate linguistic features and transcribe speech in Hindi with high accuracy. Fine-tuning on Common Voice Hindi data allows the model to better capture the language's nuances, improving transcription quality.
## Intended uses & limitations
This model is ideal for automatic speech recognition (ASR) applications in Hindi, such as media transcription, accessibility services, and educational content transcription, where audio quality is controlled.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Load the Hindi Common Voice dataset
test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")

# Load the processor and model
processor = Wav2Vec2Processor.from_pretrained("yash072/wav2vec2-large-xlsr-YashHindi-4")
model = Wav2Vec2ForCTC.from_pretrained("yash072/wav2vec2-large-xlsr-YashHindi-4")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Function to process the dataset
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

# Perform inference
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Hindi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

# Load the dataset and metrics
test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")

# Initialize processor and model
processor = Wav2Vec2Processor.from_pretrained("yash072/wav2vec2-large-xlsr-YashHindi-4")
model = Wav2Vec2ForCTC.from_pretrained("yash072/wav2vec2-large-xlsr-YashHindi-4")
model.to("cuda")

resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'

# Function to preprocess the data
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Evaluation function
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
### Limitations:
- The model may face challenges with dialectal or regional variations within Hindi.
- Performance can degrade with noisy audio or overlapping speech.
- It is not intended for real-time transcription due to latency considerations.
## Training and evaluation data
The model was fine-tuned on the Hindi portions of the Common Voice 13 and 17 datasets, which contain speech samples from native Hindi speakers. This data captures a range of accents, pronunciations, and recording conditions, enhancing the modelโs ability to generalize across different speech patterns. Evaluation was performed on a carefully curated subset, ensuring a reliable benchmark for ASR performance in Hindi.
## Training procedure
### Hyperparameters and setup:
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- **Learning rate**: 1e-4
- **Batch size**: 16 (per device)
- **Gradient accumulation steps**: 2
- **Evaluation strategy**: steps
- **Max steps**: 2500
- **Mixed precision**: FP16
- **Save steps**: 500
- **Evaluation steps**: 500
- **Logging steps**: 500
- **Warmup steps**: 500
- **Save total limit**: 1
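
For reference, a minimal sketch of how these hyperparameters might map onto `transformers.TrainingArguments` (the argument names follow the standard API; the output directory and the exact training script are assumptions):

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xlsr-hindi",  # hypothetical output directory
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,
    eval_strategy="steps",                   # "evaluation_strategy" on older transformers versions
    max_steps=2500,
    fp16=True,
    save_steps=500,
    eval_steps=500,
    logging_steps=500,
    warmup_steps=500,
    save_total_limit=1,
)
```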
### Training output
- **Global step**: 2500
- **Training runtime**: Approximately 1 hour 21 minutes
- **Epochs**: 5-6
### Training results
| Step | Training Loss | Validation Loss | WER |
|------|---------------|-----------------|--------|
| 500 | 5.603000 | 0.987691 | 0.7556 |
| 1000 | 0.720300 | 0.667561 | 0.6196 |
| 1500 | 0.507000 | 0.592814 | 0.5844 |
| 2000 | 0.431100 | 0.549786 | 0.5439 |
| 2500 | 0.395600 | 0.537703 | 0.5428 |
### Framework versions
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Datasets: 2.20.0
- Tokenizers: 0.19.1
daffahasan/en-mul | daffahasan | 2024-11-01T04:28:36Z | 113 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-01T02:25:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Helsinki-NLP
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
iecjsu/Phi-3.5-mini-IT-ORPO | iecjsu | 2024-11-01T04:26:03Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T04:24:09Z | ---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** iecjsu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sjkwon/2e-5_2184_sft-mdo-diverse-train-nllb-200-600M | sjkwon | 2024-11-01T04:22:35Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2024-11-01T04:20:24Z | ---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="sjkwon/2e-5_2184_sft-mdo-diverse-train-nllb-200-600M")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("sjkwon/2e-5_2184_sft-mdo-diverse-train-nllb-200-600M")
model = AutoModelForCausalLMWithValueHead.from_pretrained("sjkwon/2e-5_2184_sft-mdo-diverse-train-nllb-200-600M")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
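
Note that the underlying checkpoint is an NLLB-200 (seq2seq) model, so the sequence-to-sequence value-head class may be the more natural fit; a minimal sketch, assuming a recent `trl` version:

```python
from transformers import AutoTokenizer
from trl import AutoModelForSeq2SeqLMWithValueHead

# Same checkpoint, loaded with the seq2seq value head instead
tokenizer = AutoTokenizer.from_pretrained("sjkwon/2e-5_2184_sft-mdo-diverse-train-nllb-200-600M")
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("sjkwon/2e-5_2184_sft-mdo-diverse-train-nllb-200-600M")
```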
|
restor/tcd-segformer-mit-b5 | restor | 2024-11-01T04:20:35Z | 542 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"semantic-segmentation",
"vision",
"ecology",
"image-segmentation",
"dataset:restor/tcd",
"arxiv:1910.09700",
"license:cc",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-05-20T11:11:41Z | ---
library_name: transformers
tags:
- semantic-segmentation
- vision
- ecology
datasets:
- restor/tcd
pipeline_tag: image-segmentation
widget:
- src: samples/610160855a90f10006fd303e_10_00418.tif
example_title: Urban scene
license: cc
metrics:
- accuracy
- f1
- iou
---
# Model Card for Restor's SegFormer-based TCD models
This is a semantic segmentation model that can delineate tree cover in high resolution (10 cm/px) aerial images.
This model card is mostly the same for all similar models uploaded to Hugging Face. The model name refers to the specific architecture variant (e.g. nvidia-mit-b0 to nvidia-mit-b5) but the broad details for training and evaluation are identical.
This repository is for `tcd-segformer-mit-b5`
## Citation and contact
**BibTeX:**
This paper was accepted into NeurIPS 2024 under the Datasets and Benchmarks track.
The citation will be updated once the final version is confirmed and the proceedings are online.
```latex
@inproceedings{restortcd,
  author = {Veitch-Michaelis, Josh and Cottam, Andrew and Schweizer, Daniella and Broadbent, Eben N. and Dao, David and Zhang, Ce and Almeyda Zambrano, Angelica and Max, Simeon},
  title = {OAM-TCD: A globally diverse dataset of high-resolution tree cover maps},
  booktitle = {Advances in Neural Information Processing Systems},
  pages = {1--12},
  publisher = {Curran Associates, Inc.},
  volume = {37},
  year = {2024}
}
```
Please contact josh [at] restor.eco for questions or further information.
## Model Details
### Model Description
This semantic segmentation model was trained on global aerial imagery and is able to accurately delineate tree cover in similar images. The model does not detect individual trees, but provides a per-pixel classification of tree/no-tree.
- **Developed by:** [Restor](https://restor.eco) / [ETH Zurich](https://ethz.ch)
- **Funded by:** This project was made possible via a [Google.org impact grant](https://blog.google/outreach-initiatives/sustainability/restor-helps-anyone-be-part-ecological-restoration/)
- **Model type:** Semantic segmentation (binary class)
- **License:** Model training code is provided under an Apache-2 license. NVIDIA has released SegFormer under their own research license. Users should check the terms of this license before deploying. This model was trained on CC BY-NC imagery.
- **Finetuned from model:** SegFormer family
SegFormer is a variant of the Pyramid Vision Transformer v2 model, with many identical structural features and a semantic segmentation decode head. Functionally, the architecture is quite similar to a Feature Pyramid Network (FPN) as the output predictions are based on combining features from different stages of the network at different spatial resolutions.
### Model Sources
- **Repository:** https://github.com/restor-foundation/tcd
- **Paper:** We will release a preprint shortly.
## Uses
The primary use-case for this model is assessing canopy cover from aerial images (i.e. the percentage of a study area that is covered by tree canopy).
### Direct Use
This model is suitable for inference on a single image tile. For performing predictions on large orthomosaics, a higher level framework is required to manage tiling source imagery and stitching predictions. Our repository provides a comprehensive reference implementation of such a pipeline and has been tested on extremely large images (country-scale).
The model will give you predictions for an entire image. In most cases users will want to predict cover for a specific region of the image, for example a study plot or some other geographic boundary. If you predict tree cover in an image you should perform some kind of region-of-interest analysis on the results. Our linked pipeline repository supports shapefile-based region analysis.
### Out-of-Scope Use
While we trained the model on globally diverse imagery, some ecological biomes are under-represented in the training dataset and performance may vary. We therefore encourage users to experiment with their own imagery before using the model for any sort of mission-critical use.
The model was trained on imagery at a resolution of 10 cm/px. You may be able to get good predictions at other geospatial resolutions, but the results may not be reliable. In particular the model is essentially looking for "things that look like trees" and this is highly resolution dependent. If you want to routinely predict images at a higher or lower resolution, you should fine-tune this model on your own or a resampled version of the training dataset.
The model does not predict biomass, canopy height or other derived information. It only predicts the likelihood that some pixel is covered by tree canopy.
As-is, the model is not suitable for carbon credit estimation.
## Bias, Risks, and Limitations
The main limitation of this model is false positives over objects that look like, or could be confused as, trees. For example large bushes, shrubs or ground cover that looks like tree canopy.
The dataset used to train this model was annotated by non-experts. We believe that this is a reasonable trade-off given the size of the dataset and the results on independent test data, as well as empirical evaluation during operational use at Restor on partner data. However, there are almost certainly incorrect labels in the dataset and this may translate into incorrect predictions or other biases in model output. We have observed that the models tend to "disagree" with training data in a way that is probably correct (i.e. the aggregate statistics of the labels are good) and we are working to re-evaluate all training data to remove spurious labels.
We provide cross-validation results to give a robust estimate of prediction performance, as well as results on independent imagery (i.e. images the model has never seen) so users can make their own assessments. We do not provide any guarantees on accuracy and users should perform their own independent testing for any kind of "mission critical" or production use.
There is no substitute for trying the model on your own data and performing your own evaluation; we strongly encourage experimentation!
## How to Get Started with the Model
You can see a brief example of inference in [this Colab notebook](https://colab.research.google.com/drive/1N_rWko6jzGji3j_ayDR7ngT5lf4P8at_).
For end-to-end usage, we direct users to our prediction and training [pipeline](https://github.com/restor-foundation/tcd) which also supports tiled prediction over arbitrarily large images, reporting outputs, etc.
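
For orientation, a minimal single-tile sketch using plain `transformers` (this is not the full TCD pipeline; the input path and the tree class index are assumptions):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("restor/tcd-segformer-mit-b5")
model = SegformerForSemanticSegmentation.from_pretrained("restor/tcd-segformer-mit-b5")

image = Image.open("tile.tif").convert("RGB")  # hypothetical 10 cm/px tile
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take a per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
mask = upsampled.argmax(dim=1)[0]
canopy_cover = (mask == 1).float().mean().item()  # assumes class index 1 = tree
print(f"Estimated canopy cover: {canopy_cover:.1%}")
```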
## Training Details
### Training Data
The training dataset may be found [here](https://huggingface.co/datasets/restor/tcd), where you can find more details about the collection and annotation procedure. Our image labels are largely released under a CC-BY 4.0 license, with smaller subsets of CC BY-NC and CC BY-SA imagery.
### Training Procedure
We used a 5-fold cross-validation process to adjust hyperparameters during training, before training on the "full" training set and evaluating on a holdout set of images. The model in the main branch of this repository should be considered the release version.
We used [Pytorch Lightning](https://lightning.ai/) as our training framework with hyperparameters listed below. The training procedure is straightforward and should be familiar to anyone with experience training deep neural networks.
A typical training command using our pipeline for this model:
```bash
tcd-train semantic segformer-mit-b5 data.output= ... data.root=/mnt/data/tcd/dataset/holdout data.tile_size=1024
```
#### Preprocessing
This repository contains a pre-processor configuration that can be used with the model, assuming you use the `transformers` library.
You can load this preprocessor easily by using e.g.
```python
from transformers import AutoImageProcessor
processor = AutoImageProcessor.from_pretrained('restor/tcd-segformer-mit-b5')
```
Note that we do not resize input images (so that the geospatial scale of the source image is respected) and we assume that normalisation is performed in this processing step and not as a dataset transform.
#### Training Hyperparameters
- Image size: 1024 px square
- Learning rate: initially 1e-4 to 1e-5
- Learning rate schedule: reduce on plateau
- Optimizer: AdamW
- Augmentation: random crop to 1024x1024, arbitrary rotation, flips, colour adjustments
- Number of epochs: 75 during cross-validation to ensure convergence; 50 for final models
- Normalisation: Imagenet statistics
#### Speeds, Sizes, Times
You should be able to evaluate the model on a CPU (even up to mit-b5); however, you will need a lot of available RAM if you try to infer large tile sizes. In general we find that 1024 px inputs are as large as you want to go, given the fixed size of the output segmentation masks (i.e. it is probably better to perform inference in batched mode at 1024x1024 px than to try to predict a single 2048x2048 px image).
All models were trained on a single GPU with 24 GB VRAM (NVIDIA RTX3090) attached to a 32-core machine with 64GB RAM. All but the largest models can be trained in under a day on a machine of this specification. The smallest models take under half a day, while the largest models take just over a day to train.
Feedback we've received from users (in the field) is that landowners are often interested in seeing the results of aerial surveys, but data bandwidth is often a prohibiting factor in remote areas. One of our goals was to support this kind of in-field usage, so that users who fly a survey can process results offline and in a reasonable amount of time (i.e. on the order of an hour).
## Evaluation
We report evaluation results on the OAM-TCD holdout split.
### Testing Data
The training dataset may be found [here](https://huggingface.co/datasets/restor/tcd).
This model (`main` branch) was trained on all `train` images and tested on the `test` (holdout) images.

### Metrics
We report F1, Accuracy and IoU on the holdout dataset, as well as results on a 5-fold cross-validation split. Cross-validation is visualised as min/max error bars on the plots below.
### Results




## Environmental Impact
This estimate is the maximum (in terms of training time) for the SegFormer family of models presented here. Smaller models, such as `mit-b0` train in less than half a day.
- **Hardware Type:** NVIDIA RTX3090
- **Hours used:** < 36
- **Carbon Emitted:** 5.44 kg CO2 equivalent per model
Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
This estimate does not take into account the time required for experimentation, failed training runs, etc. For example, since we used cross-validation, each model actually required approximately 6x this estimate - one run for each fold, plus the final run.
Efficient inference on CPU is possible for field work, at the expense of inference latency. A typical single-battery drone flight can be processed in minutes.
## Model Card Authors
Josh Veitch-Michaelis, 2024; on behalf of the dataset authors. |
peterchiou/flux-dev-lora | peterchiou | 2024-11-01T04:15:31Z | 7 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-08-29T09:07:02Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
instance_prompt: mybreifs
---
# Flux Dev Lora
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
briefs
## What is this lora used for?
men's briefs.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('peterchiou/flux-dev-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
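
As an illustrative prompt (the card lists `briefs` as the trigger word and `mybreifs` as the instance prompt in the metadata; the exact phrasing below is an assumption):

```py
# Hypothetical prompt built around this LoRA's trigger word
image = pipeline("a studio product photo of mybreifs men's briefs").images[0]
image.save("briefs_sample.png")
```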
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) |
asr-africa/w2v-bert-2.0-CV_Fleurs-lg-400hrs-v4 | asr-africa | 2024-11-01T04:09:14Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-26T18:40:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF | featherless-ai-quants | 2024-11-01T04:06:52Z | 8 | 0 | null | [
"gguf",
"text-generation",
"base_model:v000000/L3-Umbral-Storm-8B-t0.0001",
"base_model:quantized:v000000/L3-Umbral-Storm-8B-t0.0001",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-01T03:53:05Z | ---
base_model: v000000/L3-Umbral-Storm-8B-t0.0001
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# v000000/L3-Umbral-Storm-8B-t0.0001 GGUF Quantizations ๐

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations ๐
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [v000000-L3-Umbral-Storm-8B-t0.0001-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [v000000-L3-Umbral-Storm-8B-t0.0001-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [v000000-L3-Umbral-Storm-8B-t0.0001-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [v000000-L3-Umbral-Storm-8B-t0.0001-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [v000000-L3-Umbral-Storm-8B-t0.0001-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [v000000-L3-Umbral-Storm-8B-t0.0001-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [v000000-L3-Umbral-Storm-8B-t0.0001-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [v000000-L3-Umbral-Storm-8B-t0.0001-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-IQ4_XS.gguf) | 4276.62 MB |
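To fetch any one of these files programmatically, here is a minimal sketch using `huggingface_hub` (assumes the library is installed; the repo and file name are taken from the Q4_K_S row above):

```python
# Download a single quant file from this repo (hf_hub_download caches it locally)
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF",
    filename="v000000-L3-Umbral-Storm-8B-t0.0001-Q4_K_S.gguf",
)
print(path)  # local path to the ~4.4 GB GGUF file
```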
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 🌐 **Vast Compatibility** - Support for 2400+ models and counting
- 💰 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
mradermacher/llama-2-7b-Amharic-pretrained-GGUF | mradermacher | 2024-11-01T04:02:36Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:AbelBekele/llama-2-7b-Amharic-pretrained",
"base_model:quantized:AbelBekele/llama-2-7b-Amharic-pretrained",
"endpoints_compatible",
"region:us"
] | null | 2024-11-01T01:28:08Z | ---
base_model: AbelBekele/llama-2-7b-Amharic-pretrained
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AbelBekele/llama-2-7b-Amharic-pretrained
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
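As a concrete starting point, here is a minimal sketch assuming `llama-cpp-python` is installed (any GGUF-capable runtime works); the file name matches the Q4_K_S quant listed below:

```python
# Load a single-file quant with llama-cpp-python and run a short completion
from llama_cpp import Llama

llm = Llama(model_path="llama-2-7b-Amharic-pretrained.Q4_K_S.gguf")
out = llm("Selam", max_tokens=32)  # illustrative Amharic greeting as a prompt
print(out["choices"][0]["text"])
```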
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
eeeyounglee/bigcategory-3 | eeeyounglee | 2024-11-01T04:00:08Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-01T03:59:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
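Pending details from the authors, here is a minimal sketch assuming the standard `transformers` text-classification pipeline (the repo's tags indicate a BERT classifier; label meanings are not documented here):

```python
# Minimal usage sketch for a text-classification model hosted on the Hub
from transformers import pipeline

classifier = pipeline("text-classification", model="eeeyounglee/bigcategory-3")
print(classifier("Example input text"))  # returns [{'label': ..., 'score': ...}]
```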
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Mistral-7B-v0.1-sharded-GGUF | mradermacher | 2024-11-01T03:55:10Z | 10 | 0 | transformers | [
"transformers",
"gguf",
"pretrained",
"en",
"base_model:Sharathhebbar24/Mistral-7B-v0.1-sharded",
"base_model:quantized:Sharathhebbar24/Mistral-7B-v0.1-sharded",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-01T03:26:18Z | ---
base_model: Sharathhebbar24/Mistral-7B-v0.1-sharded
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- pretrained
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Sharathhebbar24/Mistral-7B-v0.1-sharded
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
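One way to fetch and load a quant end to end, sketched with `huggingface_hub` and `llama-cpp-python` (both assumed installed; the file name comes from the Q4_K_S row below):

```python
# Download the quant via the Hub cache, then load it with llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Mistral-7B-v0.1-sharded-GGUF",
    filename="Mistral-7B-v0.1-sharded.Q4_K_S.gguf",
)
llm = Llama(model_path=path)
print(llm("Hello,", max_tokens=16)["choices"][0]["text"])
```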
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sharded-GGUF/resolve/main/Mistral-7B-v0.1-sharded.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF | featherless-ai-quants | 2024-11-01T03:54:24Z | 25 | 0 | null | [
"gguf",
"text-generation",
"base_model:rhaymison/Mistral-portuguese-luana-7b",
"base_model:quantized:rhaymison/Mistral-portuguese-luana-7b",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-01T03:37:53Z | ---
base_model: rhaymison/Mistral-portuguese-luana-7b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# rhaymison/Mistral-portuguese-luana-7b GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [rhaymison-Mistral-portuguese-luana-7b-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q8_0.gguf) | 7339.34 MB |
| Q4_K_S | [rhaymison-Mistral-portuguese-luana-7b-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q4_K_S.gguf) | 3948.57 MB |
| Q2_K | [rhaymison-Mistral-portuguese-luana-7b-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q2_K.gguf) | 2593.27 MB |
| Q6_K | [rhaymison-Mistral-portuguese-luana-7b-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q6_K.gguf) | 5666.80 MB |
| Q3_K_M | [rhaymison-Mistral-portuguese-luana-7b-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [rhaymison-Mistral-portuguese-luana-7b-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q3_K_S.gguf) | 3017.97 MB |
| Q3_K_L | [rhaymison-Mistral-portuguese-luana-7b-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q3_K_L.gguf) | 3644.97 MB |
| Q4_K_M | [rhaymison-Mistral-portuguese-luana-7b-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q4_K_M.gguf) | 4166.07 MB |
| Q5_K_S | [rhaymison-Mistral-portuguese-luana-7b-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q5_K_S.gguf) | 4766.19 MB |
| Q5_K_M | [rhaymison-Mistral-portuguese-luana-7b-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q5_K_M.gguf) | 4893.69 MB |
| IQ4_XS | [rhaymison-Mistral-portuguese-luana-7b-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-IQ4_XS.gguf) | 3761.66 MB |
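To see what is available in this repo without downloading anything, a short sketch with `huggingface_hub` (assumed installed):

```python
# List the GGUF files in this repo before choosing one to download
from huggingface_hub import list_repo_files

files = list_repo_files("featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF")
print([f for f in files if f.endswith(".gguf")])
```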
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 🌐 **Vast Compatibility** - Support for 2400+ models and counting
- 💰 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |