modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
carlosleao/FER-Facial-Expression-Recognition | carlosleao | 2024-11-06T01:54:52Z | 15 | 0 | transformers | ["transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:motheecreator/vit-Facial-Expression-Recognition", "base_model:finetune:motheecreator/vit-Facial-Expression-Recognition", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2024-10-27T23:18:02Z |
---
library_name: transformers
base_model: motheecreator/vit-Facial-Expression-Recognition
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: FER-Facial-Expression-Recognition
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FER-Facial-Expression-Recognition
This model is a fine-tuned version of [motheecreator/vit-Facial-Expression-Recognition](https://huggingface.co/motheecreator/vit-Facial-Expression-Recognition) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4710
- Accuracy: 0.8474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
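For reproducibility, the hyperparameters above map onto Hugging Face `TrainingArguments` roughly as in the sketch below (an assumed reconstruction, not from the original card; `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

# Assumed mapping of the listed hyperparameters; the effective
# total train batch size of 256 = 32 (per device) x 8 (accumulation).
args = TrainingArguments(
    output_dir="FER-Facial-Expression-Recognition",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=8,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=10,
)
```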
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.8868 | 0.8959 | 100 | 1.7638 | 0.5923 |
| 1.2277 | 1.7962 | 200 | 1.1092 | 0.7253 |
| 0.8414 | 2.6965 | 300 | 0.8105 | 0.8041 |
| 0.7076 | 3.5969 | 400 | 0.6746 | 0.8256 |
| 0.6079 | 4.4972 | 500 | 0.6111 | 0.8287 |
| 0.5624 | 5.3975 | 600 | 0.5529 | 0.8379 |
| 0.5254 | 6.2979 | 700 | 0.5266 | 0.8399 |
| 0.4784 | 7.1982 | 800 | 0.4978 | 0.8433 |
| 0.4634 | 8.0985 | 900 | 0.4844 | 0.8458 |
| 0.4305 | 8.9944 | 1000 | 0.4710 | 0.8474 |
| 0.3995 | 9.8947 | 1100 | 0.4381 | 0.8564 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
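Although the card gives no usage snippet, the checkpoint should load with the standard `transformers` image-classification pipeline; a minimal sketch (the image path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned ViT facial-expression classifier.
classifier = pipeline(
    "image-classification",
    model="carlosleao/FER-Facial-Expression-Recognition",
)

# "face.jpg" is a placeholder; pass any RGB face image.
predictions = classifier("face.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts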
|
SylvanL/ChatTCM-7B-LORA | SylvanL | 2024-11-06T01:53:20Z | 19 | 0 | null | ["safetensors", "qwen2", "license:apache-2.0", "region:us"] | null | 2024-10-26T11:18:43Z |
---
license: apache-2.0
---
### Coming Soon...
```
***** train metrics *****
epoch = 1.0
num_input_tokens_seen = 417873560
total_flos = 73823561GF
train_loss = 1.0007
train_runtime = 6 days, 5:47:01.98
train_samples_per_second = 1.571
train_steps_per_second = 0.031
```
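The card itself only says "Coming Soon", but the `qwen2` tag and the repository name suggest a LoRA adapter for a Qwen2 7B base model. A minimal loading sketch with `peft`, assuming `Qwen/Qwen2-7B-Instruct` as the base (the card does not name it):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed base model; the adapter card does not specify it.
base_id = "Qwen/Qwen2-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA adapter weights from this repository.
model = PeftModel.from_pretrained(base, "SylvanL/ChatTCM-7B-LORA")
```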
|
minoosh/bert-clf-biencoder-cross_entropy | minoosh | 2024-11-06T01:49:45Z | 14 | 0 | null | ["pytorch", "safetensors", "bert", "classification", "text-classification", "en", "region:us"] | text-classification | 2024-11-05T17:40:14Z |
---
language: en
tags:
- bert
- classification
- pytorch
pipeline_tag: text-classification
---
# BiEncoder Classification Model
This model implements a BiEncoder architecture based on BERT for text-pair classification.
## Model Details
- Base Model: bert-base-uncased
- Architecture: BiEncoder with BERT base
- Number of classes: 4
## Usage
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("minoosh/bert-clf-biencoder-cross_entropy")

# Load model weights (pytorch_model.bin downloaded from this repo)
state_dict = torch.load("pytorch_model.bin", map_location="cpu")

# Initialize model (you'll need the BiEncoderModel class)
model = BiEncoderModel(
    base_model=AutoModel.from_pretrained("bert-base-uncased"),
    num_classes=4,
)
model.load_state_dict(state_dict)
```
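The card does not define `BiEncoderModel`. A minimal sketch of what such a wrapper could look like, purely as an assumption (the author's actual class, e.g. its pooling and text-pairing logic, may differ):

```python
import torch.nn as nn

class BiEncoderModel(nn.Module):
    """Assumed sketch: classify from the base encoder's [CLS] vector."""

    def __init__(self, base_model, num_classes):
        super().__init__()
        self.base_model = base_model
        self.classifier = nn.Linear(base_model.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask=None):
        outputs = self.base_model(input_ids=input_ids, attention_mask=attention_mask)
        cls_vec = outputs.last_hidden_state[:, 0]  # [CLS] representation
        return self.classifier(cls_vec)  # logits over the 4 classes
```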
|
yjwon/mp_mistral7bv3_sft_ogd_rms_epoch1 | yjwon | 2024-11-06T01:49:13Z | 8 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-11-06T01:46:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
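In the absence of card-provided code, a minimal sketch using the standard `transformers` text-generation pipeline (assumed usage, based only on the repository's `text-generation` tag):

```python
from transformers import pipeline

# Assumed usage sketch; the card provides no official example.
generator = pipeline(
    "text-generation",
    model="yjwon/mp_mistral7bv3_sft_ogd_rms_epoch1",
)
print(generator("Hello, how are you?", max_new_tokens=50))
```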
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
8Spark/llama381binstruct_summarize_short_merged | 8Spark | 2024-11-06T01:48:28Z | 78 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2024-11-06T01:45:20Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
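As above, the card gives no code; a minimal loading sketch (assumed, based on the repository's tags, which indicate a 4-bit bitsandbytes-quantized Llama checkpoint; `device_map="auto"` requires `accelerate`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "8Spark/llama381binstruct_summarize_short_merged"
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Any quantization config saved in the repo should be picked up automatically.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
```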
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MayBashendy/ASAP_FineTuningBERT_Aug_k10_task1_organization_fold1 | MayBashendy | 2024-11-06T01:46:06Z | 162 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-11-06T00:09:20Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_Aug_k10_task1_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_Aug_k10_task1_organization_fold1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4343
- Qwk: 0.6828
- Mse: 0.4343
- Rmse: 0.6590
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
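As with the earlier card, these hyperparameters correspond roughly to the following `TrainingArguments` (an assumed reconstruction; `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ASAP_FineTuningBERT_Aug_k10_task1_organization_fold1",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```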
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 0.0109 | 2 | 14.1789 | 0.0 | 14.1789 | 3.7655 |
| No log | 0.0219 | 4 | 9.4601 | -0.0002 | 9.4601 | 3.0757 |
| No log | 0.0328 | 6 | 6.7039 | 0.0088 | 6.7039 | 2.5892 |
| No log | 0.0437 | 8 | 5.2235 | 0.0 | 5.2235 | 2.2855 |
| No log | 0.0546 | 10 | 5.2435 | 0.0 | 5.2435 | 2.2899 |
| No log | 0.0656 | 12 | 3.8311 | -0.0245 | 3.8311 | 1.9573 |
| No log | 0.0765 | 14 | 3.2489 | 0.0162 | 3.2489 | 1.8025 |
| No log | 0.0874 | 16 | 2.6749 | 0.0040 | 2.6749 | 1.6355 |
| No log | 0.0984 | 18 | 2.3878 | 0.0055 | 2.3878 | 1.5452 |
| No log | 0.1093 | 20 | 1.8945 | 0.0 | 1.8945 | 1.3764 |
| No log | 0.1202 | 22 | 1.6392 | 0.0 | 1.6392 | 1.2803 |
| No log | 0.1311 | 24 | 1.2912 | 0.1113 | 1.2912 | 1.1363 |
| No log | 0.1421 | 26 | 1.1422 | 0.0400 | 1.1422 | 1.0687 |
| No log | 0.1530 | 28 | 1.0130 | 0.0106 | 1.0130 | 1.0065 |
| No log | 0.1639 | 30 | 0.9263 | 0.0 | 0.9263 | 0.9625 |
| No log | 0.1749 | 32 | 0.8953 | 0.0 | 0.8953 | 0.9462 |
| No log | 0.1858 | 34 | 0.9182 | 0.0 | 0.9182 | 0.9582 |
| No log | 0.1967 | 36 | 0.9318 | 0.0 | 0.9318 | 0.9653 |
| No log | 0.2077 | 38 | 1.0254 | 0.0424 | 1.0254 | 1.0126 |
| No log | 0.2186 | 40 | 1.0156 | 0.0586 | 1.0156 | 1.0078 |
| No log | 0.2295 | 42 | 0.9197 | 0.0742 | 0.9197 | 0.9590 |
| No log | 0.2404 | 44 | 1.0067 | 0.0 | 1.0067 | 1.0033 |
| No log | 0.2514 | 46 | 0.9330 | 0.0 | 0.9330 | 0.9659 |
| No log | 0.2623 | 48 | 0.9484 | 0.2619 | 0.9484 | 0.9739 |
| No log | 0.2732 | 50 | 0.8811 | 0.0119 | 0.8811 | 0.9387 |
| No log | 0.2842 | 52 | 0.8344 | 0.0379 | 0.8344 | 0.9135 |
| No log | 0.2951 | 54 | 0.8245 | 0.0482 | 0.8245 | 0.9080 |
| No log | 0.3060 | 56 | 0.8316 | 0.0315 | 0.8316 | 0.9119 |
| No log | 0.3169 | 58 | 0.9018 | 0.0106 | 0.9018 | 0.9496 |
| No log | 0.3279 | 60 | 0.9466 | 0.0106 | 0.9466 | 0.9730 |
| No log | 0.3388 | 62 | 0.9551 | 0.0106 | 0.9551 | 0.9773 |
| No log | 0.3497 | 64 | 0.8340 | 0.0316 | 0.8340 | 0.9132 |
| No log | 0.3607 | 66 | 0.7705 | 0.0663 | 0.7705 | 0.8778 |
| No log | 0.3716 | 68 | 0.7504 | 0.0645 | 0.7504 | 0.8663 |
| No log | 0.3825 | 70 | 0.7620 | 0.0707 | 0.7620 | 0.8729 |
| No log | 0.3934 | 72 | 0.7448 | 0.0969 | 0.7448 | 0.8630 |
| No log | 0.4044 | 74 | 0.7023 | 0.0828 | 0.7023 | 0.8381 |
| No log | 0.4153 | 76 | 0.7446 | 0.0418 | 0.7446 | 0.8629 |
| No log | 0.4262 | 78 | 0.8371 | 0.0315 | 0.8371 | 0.9149 |
| No log | 0.4372 | 80 | 0.8154 | 0.0418 | 0.8154 | 0.9030 |
| No log | 0.4481 | 82 | 0.6675 | 0.1040 | 0.6675 | 0.8170 |
| No log | 0.4590 | 84 | 0.6637 | 0.1867 | 0.6637 | 0.8147 |
| No log | 0.4699 | 86 | 0.8092 | 0.2324 | 0.8092 | 0.8996 |
| No log | 0.4809 | 88 | 0.6696 | 0.1772 | 0.6696 | 0.8183 |
| No log | 0.4918 | 90 | 0.7069 | 0.0521 | 0.7069 | 0.8408 |
| No log | 0.5027 | 92 | 0.7755 | 0.1424 | 0.7755 | 0.8806 |
| No log | 0.5137 | 94 | 0.8115 | 0.2443 | 0.8115 | 0.9009 |
| No log | 0.5246 | 96 | 0.7862 | 0.2914 | 0.7862 | 0.8867 |
| No log | 0.5355 | 98 | 0.7444 | 0.3185 | 0.7444 | 0.8628 |
| No log | 0.5464 | 100 | 0.7149 | 0.1163 | 0.7149 | 0.8455 |
| No log | 0.5574 | 102 | 0.7028 | 0.1742 | 0.7028 | 0.8383 |
| No log | 0.5683 | 104 | 0.5869 | 0.1738 | 0.5869 | 0.7661 |
| No log | 0.5792 | 106 | 0.6169 | 0.1706 | 0.6169 | 0.7854 |
| No log | 0.5902 | 108 | 0.5825 | 0.2129 | 0.5825 | 0.7632 |
| No log | 0.6011 | 110 | 0.6352 | 0.2685 | 0.6352 | 0.7970 |
| No log | 0.6120 | 112 | 0.7164 | 0.2895 | 0.7164 | 0.8464 |
| No log | 0.6230 | 114 | 0.6857 | 0.2737 | 0.6857 | 0.8281 |
| No log | 0.6339 | 116 | 0.6143 | 0.1874 | 0.6143 | 0.7838 |
| No log | 0.6448 | 118 | 0.7046 | 0.1114 | 0.7046 | 0.8394 |
| No log | 0.6557 | 120 | 0.9554 | 0.1044 | 0.9554 | 0.9774 |
| No log | 0.6667 | 122 | 0.9606 | 0.0872 | 0.9606 | 0.9801 |
| No log | 0.6776 | 124 | 0.8275 | 0.0583 | 0.8275 | 0.9096 |
| No log | 0.6885 | 126 | 0.7398 | 0.0583 | 0.7398 | 0.8601 |
| No log | 0.6995 | 128 | 0.7323 | 0.0583 | 0.7323 | 0.8558 |
| No log | 0.7104 | 130 | 0.7692 | 0.0707 | 0.7692 | 0.8770 |
| No log | 0.7213 | 132 | 0.7347 | 0.0583 | 0.7347 | 0.8571 |
| No log | 0.7322 | 134 | 0.7172 | 0.0583 | 0.7172 | 0.8469 |
| No log | 0.7432 | 136 | 0.7012 | 0.0583 | 0.7012 | 0.8374 |
| No log | 0.7541 | 138 | 0.6804 | 0.1179 | 0.6804 | 0.8248 |
| No log | 0.7650 | 140 | 0.7080 | 0.1642 | 0.7080 | 0.8414 |
| No log | 0.7760 | 142 | 0.6411 | 0.1654 | 0.6411 | 0.8007 |
| No log | 0.7869 | 144 | 0.6105 | 0.1466 | 0.6105 | 0.7813 |
| No log | 0.7978 | 146 | 0.6146 | 0.2745 | 0.6146 | 0.7840 |
| No log | 0.8087 | 148 | 0.6224 | 0.5065 | 0.6224 | 0.7889 |
| No log | 0.8197 | 150 | 0.5854 | 0.4996 | 0.5854 | 0.7651 |
| No log | 0.8306 | 152 | 0.5275 | 0.3048 | 0.5275 | 0.7263 |
| No log | 0.8415 | 154 | 0.6258 | 0.2906 | 0.6258 | 0.7911 |
| No log | 0.8525 | 156 | 0.6752 | 0.2933 | 0.6752 | 0.8217 |
| No log | 0.8634 | 158 | 0.5665 | 0.2549 | 0.5665 | 0.7526 |
| No log | 0.8743 | 160 | 0.5561 | 0.3568 | 0.5561 | 0.7457 |
| No log | 0.8852 | 162 | 0.5938 | 0.4459 | 0.5938 | 0.7706 |
| No log | 0.8962 | 164 | 0.6049 | 0.3657 | 0.6049 | 0.7777 |
| No log | 0.9071 | 166 | 0.6631 | 0.1406 | 0.6631 | 0.8143 |
| No log | 0.9180 | 168 | 0.7038 | 0.1491 | 0.7038 | 0.8389 |
| No log | 0.9290 | 170 | 0.6221 | 0.1680 | 0.6221 | 0.7887 |
| No log | 0.9399 | 172 | 0.5553 | 0.2295 | 0.5553 | 0.7452 |
| No log | 0.9508 | 174 | 0.5432 | 0.2659 | 0.5432 | 0.7370 |
| No log | 0.9617 | 176 | 0.5287 | 0.2843 | 0.5287 | 0.7271 |
| No log | 0.9727 | 178 | 0.5256 | 0.3102 | 0.5256 | 0.7250 |
| No log | 0.9836 | 180 | 0.5196 | 0.3430 | 0.5196 | 0.7209 |
| No log | 0.9945 | 182 | 0.5304 | 0.3718 | 0.5304 | 0.7283 |
| No log | 1.0055 | 184 | 0.5403 | 0.3780 | 0.5403 | 0.7351 |
| No log | 1.0164 | 186 | 0.5235 | 0.4370 | 0.5235 | 0.7235 |
| No log | 1.0273 | 188 | 0.4944 | 0.4900 | 0.4944 | 0.7032 |
| No log | 1.0383 | 190 | 0.4662 | 0.5357 | 0.4662 | 0.6828 |
| No log | 1.0492 | 192 | 0.4610 | 0.5385 | 0.4610 | 0.6789 |
| No log | 1.0601 | 194 | 0.4702 | 0.5333 | 0.4702 | 0.6857 |
| No log | 1.0710 | 196 | 0.5005 | 0.5630 | 0.5005 | 0.7074 |
| No log | 1.0820 | 198 | 0.6166 | 0.5571 | 0.6166 | 0.7852 |
| No log | 1.0929 | 200 | 0.6454 | 0.5377 | 0.6454 | 0.8033 |
| No log | 1.1038 | 202 | 0.5709 | 0.5229 | 0.5709 | 0.7556 |
| No log | 1.1148 | 204 | 0.5619 | 0.4794 | 0.5619 | 0.7496 |
| No log | 1.1257 | 206 | 0.5985 | 0.5233 | 0.5985 | 0.7736 |
| No log | 1.1366 | 208 | 0.6531 | 0.5497 | 0.6531 | 0.8081 |
| No log | 1.1475 | 210 | 0.5850 | 0.5776 | 0.5850 | 0.7649 |
| No log | 1.1585 | 212 | 0.5471 | 0.5492 | 0.5471 | 0.7397 |
| No log | 1.1694 | 214 | 0.5583 | 0.5453 | 0.5583 | 0.7472 |
| No log | 1.1803 | 216 | 0.5010 | 0.4413 | 0.5010 | 0.7078 |
| No log | 1.1913 | 218 | 0.4814 | 0.4341 | 0.4814 | 0.6939 |
| No log | 1.2022 | 220 | 0.4597 | 0.5142 | 0.4597 | 0.6780 |
| No log | 1.2131 | 222 | 0.4564 | 0.5443 | 0.4564 | 0.6755 |
| No log | 1.2240 | 224 | 0.4444 | 0.5184 | 0.4444 | 0.6667 |
| No log | 1.2350 | 226 | 0.4441 | 0.5238 | 0.4441 | 0.6664 |
| No log | 1.2459 | 228 | 0.4460 | 0.5804 | 0.4460 | 0.6679 |
| No log | 1.2568 | 230 | 0.4810 | 0.5931 | 0.4810 | 0.6936 |
| No log | 1.2678 | 232 | 0.5246 | 0.5743 | 0.5246 | 0.7243 |
| No log | 1.2787 | 234 | 0.5335 | 0.5358 | 0.5335 | 0.7304 |
| No log | 1.2896 | 236 | 0.5301 | 0.4856 | 0.5301 | 0.7281 |
| No log | 1.3005 | 238 | 0.5644 | 0.5115 | 0.5644 | 0.7513 |
| No log | 1.3115 | 240 | 0.4750 | 0.5368 | 0.4750 | 0.6892 |
| No log | 1.3224 | 242 | 0.5530 | 0.4128 | 0.5530 | 0.7437 |
| No log | 1.3333 | 244 | 0.4930 | 0.4574 | 0.4930 | 0.7021 |
| No log | 1.3443 | 246 | 0.4788 | 0.6031 | 0.4788 | 0.6919 |
| No log | 1.3552 | 248 | 0.6167 | 0.5881 | 0.6167 | 0.7853 |
| No log | 1.3661 | 250 | 0.5248 | 0.6057 | 0.5248 | 0.7244 |
| No log | 1.3770 | 252 | 0.4988 | 0.4176 | 0.4988 | 0.7062 |
| No log | 1.3880 | 254 | 0.5818 | 0.3303 | 0.5818 | 0.7627 |
| No log | 1.3989 | 256 | 0.5330 | 0.3715 | 0.5330 | 0.7301 |
| No log | 1.4098 | 258 | 0.5061 | 0.5118 | 0.5061 | 0.7114 |
| No log | 1.4208 | 260 | 0.5194 | 0.5404 | 0.5194 | 0.7207 |
| No log | 1.4317 | 262 | 0.4773 | 0.5476 | 0.4773 | 0.6908 |
| No log | 1.4426 | 264 | 0.4499 | 0.5075 | 0.4499 | 0.6707 |
| No log | 1.4536 | 266 | 0.4315 | 0.5509 | 0.4315 | 0.6569 |
| No log | 1.4645 | 268 | 0.4336 | 0.5880 | 0.4336 | 0.6585 |
| No log | 1.4754 | 270 | 0.4701 | 0.5751 | 0.4701 | 0.6856 |
| No log | 1.4863 | 272 | 0.4992 | 0.5476 | 0.4992 | 0.7065 |
| No log | 1.4973 | 274 | 0.5184 | 0.4787 | 0.5184 | 0.7200 |
| No log | 1.5082 | 276 | 0.5729 | 0.4987 | 0.5729 | 0.7569 |
| No log | 1.5191 | 278 | 0.6270 | 0.4721 | 0.6270 | 0.7918 |
| No log | 1.5301 | 280 | 0.6072 | 0.4658 | 0.6072 | 0.7792 |
| No log | 1.5410 | 282 | 0.5376 | 0.4273 | 0.5376 | 0.7332 |
| No log | 1.5519 | 284 | 0.4989 | 0.4344 | 0.4989 | 0.7063 |
| No log | 1.5628 | 286 | 0.4536 | 0.5314 | 0.4536 | 0.6735 |
| No log | 1.5738 | 288 | 0.5414 | 0.5869 | 0.5414 | 0.7358 |
| No log | 1.5847 | 290 | 0.5300 | 0.6178 | 0.5300 | 0.7280 |
| No log | 1.5956 | 292 | 0.4025 | 0.5937 | 0.4025 | 0.6344 |
| No log | 1.6066 | 294 | 0.4080 | 0.5399 | 0.4080 | 0.6387 |
| No log | 1.6175 | 296 | 0.4256 | 0.6221 | 0.4256 | 0.6524 |
| No log | 1.6284 | 298 | 0.4739 | 0.6341 | 0.4739 | 0.6884 |
| No log | 1.6393 | 300 | 0.4071 | 0.6012 | 0.4071 | 0.6380 |
| No log | 1.6503 | 302 | 0.4014 | 0.5598 | 0.4014 | 0.6336 |
| No log | 1.6612 | 304 | 0.4180 | 0.6113 | 0.4180 | 0.6465 |
| No log | 1.6721 | 306 | 0.4969 | 0.6305 | 0.4969 | 0.7049 |
| No log | 1.6831 | 308 | 0.4780 | 0.6410 | 0.4780 | 0.6914 |
| No log | 1.6940 | 310 | 0.4035 | 0.5801 | 0.4035 | 0.6352 |
| No log | 1.7049 | 312 | 0.5961 | 0.4179 | 0.5961 | 0.7720 |
| No log | 1.7158 | 314 | 0.5675 | 0.4326 | 0.5675 | 0.7533 |
| No log | 1.7268 | 316 | 0.4057 | 0.5940 | 0.4057 | 0.6370 |
| No log | 1.7377 | 318 | 0.5059 | 0.6366 | 0.5059 | 0.7113 |
| No log | 1.7486 | 320 | 0.6015 | 0.5919 | 0.6015 | 0.7756 |
| No log | 1.7596 | 322 | 0.5197 | 0.6216 | 0.5197 | 0.7209 |
| No log | 1.7705 | 324 | 0.4682 | 0.5310 | 0.4682 | 0.6842 |
| No log | 1.7814 | 326 | 0.4665 | 0.4939 | 0.4665 | 0.6830 |
| No log | 1.7923 | 328 | 0.4746 | 0.5552 | 0.4746 | 0.6889 |
| No log | 1.8033 | 330 | 0.5345 | 0.6037 | 0.5345 | 0.7311 |
| No log | 1.8142 | 332 | 0.5584 | 0.6274 | 0.5584 | 0.7473 |
| No log | 1.8251 | 334 | 0.4737 | 0.6571 | 0.4737 | 0.6882 |
| No log | 1.8361 | 336 | 0.4033 | 0.5539 | 0.4033 | 0.6350 |
| No log | 1.8470 | 338 | 0.4598 | 0.4858 | 0.4598 | 0.6781 |
| No log | 1.8579 | 340 | 0.4197 | 0.5420 | 0.4197 | 0.6478 |
| No log | 1.8689 | 342 | 0.4212 | 0.6235 | 0.4212 | 0.6490 |
| No log | 1.8798 | 344 | 0.4200 | 0.6371 | 0.4200 | 0.6481 |
| No log | 1.8907 | 346 | 0.3990 | 0.5783 | 0.3990 | 0.6316 |
| No log | 1.9016 | 348 | 0.4226 | 0.5218 | 0.4226 | 0.6501 |
| No log | 1.9126 | 350 | 0.4149 | 0.5250 | 0.4149 | 0.6442 |
| No log | 1.9235 | 352 | 0.4351 | 0.5940 | 0.4351 | 0.6596 |
| No log | 1.9344 | 354 | 0.5040 | 0.6247 | 0.5040 | 0.7099 |
| No log | 1.9454 | 356 | 0.4859 | 0.6176 | 0.4859 | 0.6970 |
| No log | 1.9563 | 358 | 0.4065 | 0.5837 | 0.4065 | 0.6376 |
| No log | 1.9672 | 360 | 0.4146 | 0.5273 | 0.4146 | 0.6439 |
| No log | 1.9781 | 362 | 0.4071 | 0.5699 | 0.4071 | 0.6381 |
| No log | 1.9891 | 364 | 0.4827 | 0.6188 | 0.4827 | 0.6948 |
| No log | 2.0 | 366 | 0.7078 | 0.5839 | 0.7078 | 0.8413 |
| No log | 2.0109 | 368 | 0.7318 | 0.5785 | 0.7318 | 0.8554 |
| No log | 2.0219 | 370 | 0.5917 | 0.5791 | 0.5917 | 0.7692 |
| No log | 2.0328 | 372 | 0.4511 | 0.5734 | 0.4511 | 0.6716 |
| No log | 2.0437 | 374 | 0.4450 | 0.5060 | 0.4450 | 0.6671 |
| No log | 2.0546 | 376 | 0.4404 | 0.5194 | 0.4404 | 0.6636 |
| No log | 2.0656 | 378 | 0.4594 | 0.5884 | 0.4594 | 0.6778 |
| No log | 2.0765 | 380 | 0.5087 | 0.5876 | 0.5087 | 0.7132 |
| No log | 2.0874 | 382 | 0.4920 | 0.6001 | 0.4920 | 0.7014 |
| No log | 2.0984 | 384 | 0.4247 | 0.6251 | 0.4247 | 0.6517 |
| No log | 2.1093 | 386 | 0.4035 | 0.5417 | 0.4035 | 0.6352 |
| No log | 2.1202 | 388 | 0.4045 | 0.5479 | 0.4045 | 0.6360 |
| No log | 2.1311 | 390 | 0.4082 | 0.6009 | 0.4082 | 0.6389 |
| No log | 2.1421 | 392 | 0.4057 | 0.6156 | 0.4057 | 0.6369 |
| No log | 2.1530 | 394 | 0.3942 | 0.5717 | 0.3942 | 0.6279 |
| No log | 2.1639 | 396 | 0.3955 | 0.5744 | 0.3955 | 0.6289 |
| No log | 2.1749 | 398 | 0.4179 | 0.6359 | 0.4179 | 0.6464 |
| No log | 2.1858 | 400 | 0.4318 | 0.6241 | 0.4318 | 0.6571 |
| No log | 2.1967 | 402 | 0.4105 | 0.5794 | 0.4105 | 0.6407 |
| No log | 2.2077 | 404 | 0.4181 | 0.6184 | 0.4181 | 0.6466 |
| No log | 2.2186 | 406 | 0.4245 | 0.6411 | 0.4245 | 0.6515 |
| No log | 2.2295 | 408 | 0.4838 | 0.7039 | 0.4838 | 0.6955 |
| No log | 2.2404 | 410 | 0.4563 | 0.6736 | 0.4563 | 0.6755 |
| No log | 2.2514 | 412 | 0.4220 | 0.6546 | 0.4220 | 0.6496 |
| No log | 2.2623 | 414 | 0.4187 | 0.6309 | 0.4187 | 0.6471 |
| No log | 2.2732 | 416 | 0.4298 | 0.6515 | 0.4298 | 0.6556 |
| No log | 2.2842 | 418 | 0.4295 | 0.6683 | 0.4295 | 0.6554 |
| No log | 2.2951 | 420 | 0.4378 | 0.6727 | 0.4378 | 0.6616 |
| No log | 2.3060 | 422 | 0.4848 | 0.7065 | 0.4848 | 0.6963 |
| No log | 2.3169 | 424 | 0.4900 | 0.7102 | 0.4900 | 0.7000 |
| No log | 2.3279 | 426 | 0.4864 | 0.7043 | 0.4864 | 0.6974 |
| No log | 2.3388 | 428 | 0.4254 | 0.6500 | 0.4254 | 0.6522 |
| No log | 2.3497 | 430 | 0.4424 | 0.6649 | 0.4424 | 0.6652 |
| No log | 2.3607 | 432 | 0.4411 | 0.6596 | 0.4411 | 0.6642 |
| No log | 2.3716 | 434 | 0.4951 | 0.6895 | 0.4951 | 0.7037 |
| No log | 2.3825 | 436 | 0.4736 | 0.6470 | 0.4736 | 0.6882 |
| No log | 2.3934 | 438 | 0.3945 | 0.6360 | 0.3945 | 0.6281 |
| No log | 2.4044 | 440 | 0.3849 | 0.6109 | 0.3849 | 0.6204 |
| No log | 2.4153 | 442 | 0.3977 | 0.6743 | 0.3977 | 0.6306 |
| No log | 2.4262 | 444 | 0.4673 | 0.6772 | 0.4673 | 0.6836 |
| No log | 2.4372 | 446 | 0.4191 | 0.6863 | 0.4191 | 0.6474 |
| No log | 2.4481 | 448 | 0.3829 | 0.5632 | 0.3829 | 0.6188 |
| No log | 2.4590 | 450 | 0.4607 | 0.4736 | 0.4607 | 0.6788 |
| No log | 2.4699 | 452 | 0.4604 | 0.4519 | 0.4604 | 0.6786 |
| No log | 2.4809 | 454 | 0.4117 | 0.5257 | 0.4117 | 0.6417 |
| No log | 2.4918 | 456 | 0.4919 | 0.6471 | 0.4919 | 0.7013 |
| No log | 2.5027 | 458 | 0.6115 | 0.6222 | 0.6115 | 0.7820 |
| No log | 2.5137 | 460 | 0.5588 | 0.6357 | 0.5588 | 0.7475 |
| No log | 2.5246 | 462 | 0.4183 | 0.6196 | 0.4183 | 0.6467 |
| No log | 2.5355 | 464 | 0.3884 | 0.6091 | 0.3884 | 0.6232 |
| No log | 2.5464 | 466 | 0.4146 | 0.6658 | 0.4146 | 0.6439 |
| No log | 2.5574 | 468 | 0.4609 | 0.6953 | 0.4609 | 0.6789 |
| No log | 2.5683 | 470 | 0.4937 | 0.7030 | 0.4937 | 0.7026 |
| No log | 2.5792 | 472 | 0.4165 | 0.7009 | 0.4165 | 0.6453 |
| No log | 2.5902 | 474 | 0.4035 | 0.6975 | 0.4035 | 0.6353 |
| No log | 2.6011 | 476 | 0.3784 | 0.6448 | 0.3784 | 0.6152 |
| No log | 2.6120 | 478 | 0.3827 | 0.6170 | 0.3827 | 0.6186 |
| No log | 2.6230 | 480 | 0.3785 | 0.6373 | 0.3785 | 0.6152 |
| No log | 2.6339 | 482 | 0.3974 | 0.6763 | 0.3974 | 0.6304 |
| No log | 2.6448 | 484 | 0.4128 | 0.6699 | 0.4128 | 0.6425 |
| No log | 2.6557 | 486 | 0.4248 | 0.6726 | 0.4248 | 0.6518 |
| No log | 2.6667 | 488 | 0.4199 | 0.6384 | 0.4199 | 0.6480 |
| No log | 2.6776 | 490 | 0.4513 | 0.6759 | 0.4513 | 0.6718 |
| No log | 2.6885 | 492 | 0.4844 | 0.6906 | 0.4844 | 0.6960 |
| No log | 2.6995 | 494 | 0.5336 | 0.7130 | 0.5336 | 0.7305 |
| No log | 2.7104 | 496 | 0.4151 | 0.6785 | 0.4151 | 0.6443 |
| No log | 2.7213 | 498 | 0.3709 | 0.5880 | 0.3709 | 0.6090 |
| 0.607 | 2.7322 | 500 | 0.3684 | 0.6001 | 0.3684 | 0.6070 |
| 0.607 | 2.7432 | 502 | 0.3772 | 0.6561 | 0.3772 | 0.6142 |
| 0.607 | 2.7541 | 504 | 0.3863 | 0.6633 | 0.3863 | 0.6216 |
| 0.607 | 2.7650 | 506 | 0.4477 | 0.6714 | 0.4477 | 0.6691 |
| 0.607 | 2.7760 | 508 | 0.4170 | 0.6614 | 0.4170 | 0.6457 |
| 0.607 | 2.7869 | 510 | 0.4144 | 0.6592 | 0.4144 | 0.6438 |
| 0.607 | 2.7978 | 512 | 0.4140 | 0.6381 | 0.4140 | 0.6434 |
| 0.607 | 2.8087 | 514 | 0.4222 | 0.6254 | 0.4222 | 0.6498 |
| 0.607 | 2.8197 | 516 | 0.3975 | 0.6220 | 0.3975 | 0.6305 |
| 0.607 | 2.8306 | 518 | 0.4390 | 0.6677 | 0.4390 | 0.6626 |
| 0.607 | 2.8415 | 520 | 0.5226 | 0.6969 | 0.5226 | 0.7229 |
| 0.607 | 2.8525 | 522 | 0.4767 | 0.6591 | 0.4767 | 0.6904 |
| 0.607 | 2.8634 | 524 | 0.4297 | 0.5988 | 0.4297 | 0.6555 |
| 0.607 | 2.8743 | 526 | 0.4443 | 0.6188 | 0.4443 | 0.6666 |
| 0.607 | 2.8852 | 528 | 0.4930 | 0.6881 | 0.4930 | 0.7021 |
| 0.607 | 2.8962 | 530 | 0.4471 | 0.6759 | 0.4471 | 0.6686 |
| 0.607 | 2.9071 | 532 | 0.3930 | 0.5952 | 0.3930 | 0.6269 |
| 0.607 | 2.9180 | 534 | 0.4175 | 0.5611 | 0.4175 | 0.6461 |
| 0.607 | 2.9290 | 536 | 0.3994 | 0.5988 | 0.3994 | 0.6320 |
| 0.607 | 2.9399 | 538 | 0.4489 | 0.7035 | 0.4489 | 0.6700 |
| 0.607 | 2.9508 | 540 | 0.5295 | 0.7269 | 0.5295 | 0.7277 |
| 0.607 | 2.9617 | 542 | 0.5152 | 0.7101 | 0.5152 | 0.7178 |
| 0.607 | 2.9727 | 544 | 0.4430 | 0.6825 | 0.4430 | 0.6656 |
| 0.607 | 2.9836 | 546 | 0.3930 | 0.5991 | 0.3930 | 0.6269 |
| 0.607 | 2.9945 | 548 | 0.3973 | 0.5746 | 0.3973 | 0.6303 |
| 0.607 | 3.0055 | 550 | 0.3952 | 0.6510 | 0.3952 | 0.6286 |
| 0.607 | 3.0164 | 552 | 0.4923 | 0.7022 | 0.4923 | 0.7017 |
| 0.607 | 3.0273 | 554 | 0.6044 | 0.7124 | 0.6044 | 0.7774 |
| 0.607 | 3.0383 | 556 | 0.5509 | 0.7097 | 0.5509 | 0.7422 |
| 0.607 | 3.0492 | 558 | 0.4189 | 0.6357 | 0.4189 | 0.6472 |
| 0.607 | 3.0601 | 560 | 0.3894 | 0.5636 | 0.3894 | 0.6240 |
| 0.607 | 3.0710 | 562 | 0.3822 | 0.5907 | 0.3822 | 0.6183 |
| 0.607 | 3.0820 | 564 | 0.3771 | 0.6326 | 0.3771 | 0.6141 |
| 0.607 | 3.0929 | 566 | 0.3787 | 0.6395 | 0.3787 | 0.6154 |
| 0.607 | 3.1038 | 568 | 0.3792 | 0.6340 | 0.3792 | 0.6158 |
| 0.607 | 3.1148 | 570 | 0.3834 | 0.6278 | 0.3834 | 0.6192 |
| 0.607 | 3.1257 | 572 | 0.3847 | 0.6427 | 0.3847 | 0.6202 |
| 0.607 | 3.1366 | 574 | 0.3863 | 0.6368 | 0.3863 | 0.6215 |
| 0.607 | 3.1475 | 576 | 0.3971 | 0.6653 | 0.3971 | 0.6302 |
| 0.607 | 3.1585 | 578 | 0.4385 | 0.7050 | 0.4385 | 0.6622 |
| 0.607 | 3.1694 | 580 | 0.4307 | 0.6727 | 0.4307 | 0.6563 |
| 0.607 | 3.1803 | 582 | 0.4245 | 0.6519 | 0.4245 | 0.6515 |
| 0.607 | 3.1913 | 584 | 0.4061 | 0.6455 | 0.4061 | 0.6373 |
| 0.607 | 3.2022 | 586 | 0.4596 | 0.6835 | 0.4596 | 0.6779 |
| 0.607 | 3.2131 | 588 | 0.5658 | 0.7147 | 0.5658 | 0.7522 |
| 0.607 | 3.2240 | 590 | 0.5216 | 0.7017 | 0.5216 | 0.7222 |
| 0.607 | 3.2350 | 592 | 0.4061 | 0.6832 | 0.4061 | 0.6372 |
| 0.607 | 3.2459 | 594 | 0.3829 | 0.6158 | 0.3829 | 0.6188 |
| 0.607 | 3.2568 | 596 | 0.3852 | 0.6522 | 0.3852 | 0.6206 |
| 0.607 | 3.2678 | 598 | 0.3973 | 0.6769 | 0.3973 | 0.6303 |
| 0.607 | 3.2787 | 600 | 0.3808 | 0.6280 | 0.3808 | 0.6171 |
| 0.607 | 3.2896 | 602 | 0.3861 | 0.5614 | 0.3861 | 0.6214 |
| 0.607 | 3.3005 | 604 | 0.3780 | 0.5874 | 0.3780 | 0.6149 |
| 0.607 | 3.3115 | 606 | 0.4420 | 0.6650 | 0.4420 | 0.6649 |
| 0.607 | 3.3224 | 608 | 0.4948 | 0.6662 | 0.4948 | 0.7034 |
| 0.607 | 3.3333 | 610 | 0.4449 | 0.6152 | 0.4449 | 0.6670 |
| 0.607 | 3.3443 | 612 | 0.4246 | 0.5565 | 0.4246 | 0.6516 |
| 0.607 | 3.3552 | 614 | 0.4310 | 0.6360 | 0.4310 | 0.6565 |
| 0.607 | 3.3661 | 616 | 0.4138 | 0.6589 | 0.4138 | 0.6433 |
| 0.607 | 3.3770 | 618 | 0.4347 | 0.7003 | 0.4347 | 0.6593 |
| 0.607 | 3.3880 | 620 | 0.3917 | 0.6808 | 0.3917 | 0.6258 |
| 0.607 | 3.3989 | 622 | 0.5135 | 0.5013 | 0.5135 | 0.7166 |
| 0.607 | 3.4098 | 624 | 0.6443 | 0.4388 | 0.6443 | 0.8027 |
| 0.607 | 3.4208 | 626 | 0.4526 | 0.5425 | 0.4526 | 0.6727 |
| 0.607 | 3.4317 | 628 | 0.4097 | 0.6892 | 0.4097 | 0.6401 |
| 0.607 | 3.4426 | 630 | 0.4461 | 0.6864 | 0.4461 | 0.6679 |
| 0.607 | 3.4536 | 632 | 0.3879 | 0.6434 | 0.3879 | 0.6228 |
| 0.607 | 3.4645 | 634 | 0.4188 | 0.5163 | 0.4188 | 0.6472 |
| 0.607 | 3.4754 | 636 | 0.4235 | 0.5117 | 0.4235 | 0.6508 |
| 0.607 | 3.4863 | 638 | 0.3996 | 0.5794 | 0.3996 | 0.6322 |
| 0.607 | 3.4973 | 640 | 0.5455 | 0.7092 | 0.5455 | 0.7386 |
| 0.607 | 3.5082 | 642 | 0.7077 | 0.6892 | 0.7077 | 0.8412 |
| 0.607 | 3.5191 | 644 | 0.6435 | 0.6944 | 0.6435 | 0.8022 |
| 0.607 | 3.5301 | 646 | 0.4782 | 0.6520 | 0.4782 | 0.6915 |
| 0.607 | 3.5410 | 648 | 0.4269 | 0.4913 | 0.4269 | 0.6533 |
| 0.607 | 3.5519 | 650 | 0.4246 | 0.4695 | 0.4246 | 0.6516 |
| 0.607 | 3.5628 | 652 | 0.4115 | 0.5998 | 0.4115 | 0.6415 |
| 0.607 | 3.5738 | 654 | 0.4597 | 0.6877 | 0.4597 | 0.6780 |
| 0.607 | 3.5847 | 656 | 0.4340 | 0.6687 | 0.4340 | 0.6588 |
| 0.607 | 3.5956 | 658 | 0.3930 | 0.6341 | 0.3930 | 0.6269 |
| 0.607 | 3.6066 | 660 | 0.3879 | 0.6562 | 0.3879 | 0.6228 |
| 0.607 | 3.6175 | 662 | 0.4091 | 0.7019 | 0.4091 | 0.6396 |
| 0.607 | 3.6284 | 664 | 0.3864 | 0.6803 | 0.3864 | 0.6216 |
| 0.607 | 3.6393 | 666 | 0.3978 | 0.5652 | 0.3978 | 0.6307 |
| 0.607 | 3.6503 | 668 | 0.3905 | 0.5819 | 0.3905 | 0.6249 |
| 0.607 | 3.6612 | 670 | 0.3886 | 0.6652 | 0.3886 | 0.6234 |
| 0.607 | 3.6721 | 672 | 0.4489 | 0.7227 | 0.4489 | 0.6700 |
| 0.607 | 3.6831 | 674 | 0.4531 | 0.7178 | 0.4531 | 0.6732 |
| 0.607 | 3.6940 | 676 | 0.4081 | 0.6671 | 0.4081 | 0.6388 |
| 0.607 | 3.7049 | 678 | 0.4041 | 0.6518 | 0.4041 | 0.6357 |
| 0.607 | 3.7158 | 680 | 0.4603 | 0.6997 | 0.4603 | 0.6785 |
| 0.607 | 3.7268 | 682 | 0.4787 | 0.6952 | 0.4787 | 0.6919 |
| 0.607 | 3.7377 | 684 | 0.4338 | 0.6872 | 0.4338 | 0.6586 |
| 0.607 | 3.7486 | 686 | 0.4440 | 0.6928 | 0.4440 | 0.6663 |
| 0.607 | 3.7596 | 688 | 0.4232 | 0.6878 | 0.4232 | 0.6506 |
| 0.607 | 3.7705 | 690 | 0.4355 | 0.7064 | 0.4355 | 0.6599 |
| 0.607 | 3.7814 | 692 | 0.3944 | 0.6410 | 0.3944 | 0.6280 |
| 0.607 | 3.7923 | 694 | 0.3913 | 0.5901 | 0.3913 | 0.6255 |
| 0.607 | 3.8033 | 696 | 0.3891 | 0.6045 | 0.3891 | 0.6238 |
| 0.607 | 3.8142 | 698 | 0.3933 | 0.6621 | 0.3933 | 0.6272 |
| 0.607 | 3.8251 | 700 | 0.4423 | 0.7134 | 0.4423 | 0.6651 |
| 0.607 | 3.8361 | 702 | 0.4065 | 0.6602 | 0.4065 | 0.6376 |
| 0.607 | 3.8470 | 704 | 0.3859 | 0.6226 | 0.3859 | 0.6212 |
| 0.607 | 3.8579 | 706 | 0.4044 | 0.5417 | 0.4044 | 0.6359 |
| 0.607 | 3.8689 | 708 | 0.3895 | 0.6043 | 0.3895 | 0.6241 |
| 0.607 | 3.8798 | 710 | 0.4644 | 0.7039 | 0.4644 | 0.6815 |
| 0.607 | 3.8907 | 712 | 0.5670 | 0.7059 | 0.5670 | 0.7530 |
| 0.607 | 3.9016 | 714 | 0.5141 | 0.7080 | 0.5141 | 0.7170 |
| 0.607 | 3.9126 | 716 | 0.4022 | 0.6472 | 0.4022 | 0.6342 |
| 0.607 | 3.9235 | 718 | 0.3812 | 0.6223 | 0.3812 | 0.6174 |
| 0.607 | 3.9344 | 720 | 0.3800 | 0.6402 | 0.3800 | 0.6165 |
| 0.607 | 3.9454 | 722 | 0.4080 | 0.6947 | 0.4080 | 0.6388 |
| 0.607 | 3.9563 | 724 | 0.4075 | 0.6923 | 0.4075 | 0.6384 |
| 0.607 | 3.9672 | 726 | 0.3904 | 0.6554 | 0.3904 | 0.6248 |
| 0.607 | 3.9781 | 728 | 0.3910 | 0.6419 | 0.3910 | 0.6253 |
| 0.607 | 3.9891 | 730 | 0.4150 | 0.6495 | 0.4150 | 0.6442 |
| 0.607 | 4.0 | 732 | 0.4696 | 0.7022 | 0.4696 | 0.6853 |
| 0.607 | 4.0109 | 734 | 0.4725 | 0.6901 | 0.4725 | 0.6874 |
| 0.607 | 4.0219 | 736 | 0.4559 | 0.6708 | 0.4559 | 0.6752 |
| 0.607 | 4.0328 | 738 | 0.4124 | 0.6242 | 0.4124 | 0.6422 |
| 0.607 | 4.0437 | 740 | 0.4292 | 0.6620 | 0.4292 | 0.6552 |
| 0.607 | 4.0546 | 742 | 0.4520 | 0.6842 | 0.4520 | 0.6723 |
| 0.607 | 4.0656 | 744 | 0.4590 | 0.7001 | 0.4590 | 0.6775 |
| 0.607 | 4.0765 | 746 | 0.3923 | 0.6585 | 0.3923 | 0.6263 |
| 0.607 | 4.0874 | 748 | 0.3881 | 0.5867 | 0.3881 | 0.6230 |
| 0.607 | 4.0984 | 750 | 0.3898 | 0.6338 | 0.3898 | 0.6243 |
| 0.607 | 4.1093 | 752 | 0.4601 | 0.7051 | 0.4601 | 0.6783 |
| 0.607 | 4.1202 | 754 | 0.5707 | 0.7124 | 0.5707 | 0.7554 |
| 0.607 | 4.1311 | 756 | 0.5407 | 0.7193 | 0.5407 | 0.7353 |
| 0.607 | 4.1421 | 758 | 0.4473 | 0.6954 | 0.4473 | 0.6688 |
| 0.607 | 4.1530 | 760 | 0.4335 | 0.6742 | 0.4335 | 0.6584 |
| 0.607 | 4.1639 | 762 | 0.4578 | 0.7089 | 0.4578 | 0.6766 |
| 0.607 | 4.1749 | 764 | 0.4706 | 0.7117 | 0.4706 | 0.6860 |
| 0.607 | 4.1858 | 766 | 0.4105 | 0.6800 | 0.4105 | 0.6407 |
| 0.607 | 4.1967 | 768 | 0.3975 | 0.6577 | 0.3975 | 0.6305 |
| 0.607 | 4.2077 | 770 | 0.3907 | 0.6524 | 0.3907 | 0.6251 |
| 0.607 | 4.2186 | 772 | 0.4313 | 0.7087 | 0.4313 | 0.6567 |
| 0.607 | 4.2295 | 774 | 0.4147 | 0.6954 | 0.4147 | 0.6440 |
| 0.607 | 4.2404 | 776 | 0.3894 | 0.6156 | 0.3894 | 0.6240 |
| 0.607 | 4.2514 | 778 | 0.3928 | 0.6310 | 0.3928 | 0.6267 |
| 0.607 | 4.2623 | 780 | 0.4224 | 0.6944 | 0.4224 | 0.6499 |
| 0.607 | 4.2732 | 782 | 0.5036 | 0.7232 | 0.5036 | 0.7096 |
| 0.607 | 4.2842 | 784 | 0.4738 | 0.7135 | 0.4738 | 0.6883 |
| 0.607 | 4.2951 | 786 | 0.4090 | 0.6499 | 0.4090 | 0.6395 |
| 0.607 | 4.3060 | 788 | 0.3925 | 0.5809 | 0.3925 | 0.6265 |
| 0.607 | 4.3169 | 790 | 0.3883 | 0.6072 | 0.3883 | 0.6231 |
| 0.607 | 4.3279 | 792 | 0.4304 | 0.6704 | 0.4304 | 0.6561 |
| 0.607 | 4.3388 | 794 | 0.4758 | 0.7198 | 0.4758 | 0.6898 |
| 0.607 | 4.3497 | 796 | 0.4395 | 0.6953 | 0.4395 | 0.6629 |
| 0.607 | 4.3607 | 798 | 0.3904 | 0.6172 | 0.3904 | 0.6248 |
| 0.607 | 4.3716 | 800 | 0.3941 | 0.5754 | 0.3941 | 0.6278 |
| 0.607 | 4.3825 | 802 | 0.3875 | 0.6302 | 0.3875 | 0.6225 |
| 0.607 | 4.3934 | 804 | 0.4530 | 0.7273 | 0.4530 | 0.6730 |
| 0.607 | 4.4044 | 806 | 0.4927 | 0.7342 | 0.4927 | 0.7019 |
| 0.607 | 4.4153 | 808 | 0.4266 | 0.6905 | 0.4266 | 0.6531 |
| 0.607 | 4.4262 | 810 | 0.3930 | 0.6281 | 0.3930 | 0.6269 |
| 0.607 | 4.4372 | 812 | 0.3883 | 0.6272 | 0.3883 | 0.6232 |
| 0.607 | 4.4481 | 814 | 0.4272 | 0.6993 | 0.4272 | 0.6536 |
| 0.607 | 4.4590 | 816 | 0.4875 | 0.7405 | 0.4875 | 0.6982 |
| 0.607 | 4.4699 | 818 | 0.4591 | 0.7352 | 0.4591 | 0.6776 |
| 0.607 | 4.4809 | 820 | 0.4082 | 0.6844 | 0.4082 | 0.6389 |
| 0.607 | 4.4918 | 822 | 0.3852 | 0.6162 | 0.3852 | 0.6206 |
| 0.607 | 4.5027 | 824 | 0.3863 | 0.6337 | 0.3863 | 0.6216 |
| 0.607 | 4.5137 | 826 | 0.4065 | 0.6743 | 0.4065 | 0.6376 |
| 0.607 | 4.5246 | 828 | 0.4270 | 0.6945 | 0.4270 | 0.6534 |
| 0.607 | 4.5355 | 830 | 0.4097 | 0.6740 | 0.4097 | 0.6401 |
| 0.607 | 4.5464 | 832 | 0.4097 | 0.6568 | 0.4097 | 0.6401 |
| 0.607 | 4.5574 | 834 | 0.4218 | 0.6730 | 0.4218 | 0.6495 |
| 0.607 | 4.5683 | 836 | 0.4475 | 0.6717 | 0.4475 | 0.6689 |
| 0.607 | 4.5792 | 838 | 0.4434 | 0.6664 | 0.4434 | 0.6659 |
| 0.607 | 4.5902 | 840 | 0.4420 | 0.6722 | 0.4420 | 0.6648 |
| 0.607 | 4.6011 | 842 | 0.4006 | 0.6141 | 0.4006 | 0.6329 |
| 0.607 | 4.6120 | 844 | 0.3941 | 0.6079 | 0.3941 | 0.6278 |
| 0.607 | 4.6230 | 846 | 0.4142 | 0.6932 | 0.4142 | 0.6436 |
| 0.607 | 4.6339 | 848 | 0.4404 | 0.6876 | 0.4404 | 0.6636 |
| 0.607 | 4.6448 | 850 | 0.4210 | 0.6736 | 0.4210 | 0.6489 |
| 0.607 | 4.6557 | 852 | 0.4061 | 0.5875 | 0.4061 | 0.6373 |
| 0.607 | 4.6667 | 854 | 0.4138 | 0.5792 | 0.4138 | 0.6433 |
| 0.607 | 4.6776 | 856 | 0.4460 | 0.6530 | 0.4460 | 0.6678 |
| 0.607 | 4.6885 | 858 | 0.5358 | 0.7033 | 0.5358 | 0.7320 |
| 0.607 | 4.6995 | 860 | 0.5115 | 0.7053 | 0.5115 | 0.7152 |
| 0.607 | 4.7104 | 862 | 0.4208 | 0.6442 | 0.4208 | 0.6487 |
| 0.607 | 4.7213 | 864 | 0.4039 | 0.6006 | 0.4039 | 0.6355 |
| 0.607 | 4.7322 | 866 | 0.3984 | 0.6099 | 0.3984 | 0.6312 |
| 0.607 | 4.7432 | 868 | 0.4098 | 0.6658 | 0.4098 | 0.6402 |
| 0.607 | 4.7541 | 870 | 0.4521 | 0.7137 | 0.4521 | 0.6724 |
| 0.607 | 4.7650 | 872 | 0.4410 | 0.7141 | 0.4410 | 0.6640 |
| 0.607 | 4.7760 | 874 | 0.4126 | 0.6622 | 0.4126 | 0.6423 |
| 0.607 | 4.7869 | 876 | 0.3992 | 0.6234 | 0.3992 | 0.6318 |
| 0.607 | 4.7978 | 878 | 0.3970 | 0.6267 | 0.3970 | 0.6301 |
| 0.607 | 4.8087 | 880 | 0.4332 | 0.6711 | 0.4332 | 0.6582 |
| 0.607 | 4.8197 | 882 | 0.5048 | 0.7190 | 0.5048 | 0.7105 |
| 0.607 | 4.8306 | 884 | 0.4796 | 0.7010 | 0.4796 | 0.6925 |
| 0.607 | 4.8415 | 886 | 0.4306 | 0.6482 | 0.4306 | 0.6562 |
| 0.607 | 4.8525 | 888 | 0.4131 | 0.6232 | 0.4131 | 0.6428 |
| 0.607 | 4.8634 | 890 | 0.4270 | 0.6525 | 0.4270 | 0.6534 |
| 0.607 | 4.8743 | 892 | 0.4205 | 0.6597 | 0.4205 | 0.6485 |
| 0.607 | 4.8852 | 894 | 0.4302 | 0.6630 | 0.4302 | 0.6559 |
| 0.607 | 4.8962 | 896 | 0.4294 | 0.6730 | 0.4294 | 0.6553 |
| 0.607 | 4.9071 | 898 | 0.4071 | 0.6492 | 0.4071 | 0.6380 |
| 0.607 | 4.9180 | 900 | 0.4043 | 0.6471 | 0.4043 | 0.6359 |
| 0.607 | 4.9290 | 902 | 0.4179 | 0.6573 | 0.4179 | 0.6464 |
| 0.607 | 4.9399 | 904 | 0.4568 | 0.6980 | 0.4568 | 0.6758 |
| 0.607 | 4.9508 | 906 | 0.4659 | 0.7002 | 0.4659 | 0.6825 |
| 0.607 | 4.9617 | 908 | 0.4453 | 0.6775 | 0.4453 | 0.6673 |
| 0.607 | 4.9727 | 910 | 0.4177 | 0.6397 | 0.4177 | 0.6463 |
| 0.607 | 4.9836 | 912 | 0.4257 | 0.6499 | 0.4257 | 0.6525 |
| 0.607 | 4.9945 | 914 | 0.4626 | 0.6786 | 0.4626 | 0.6802 |
| 0.607 | 5.0055 | 916 | 0.5056 | 0.6989 | 0.5056 | 0.7110 |
| 0.607 | 5.0164 | 918 | 0.5613 | 0.7235 | 0.5613 | 0.7492 |
| 0.607 | 5.0273 | 920 | 0.4788 | 0.6820 | 0.4788 | 0.6919 |
| 0.607 | 5.0383 | 922 | 0.4226 | 0.6052 | 0.4226 | 0.6501 |
| 0.607 | 5.0492 | 924 | 0.4478 | 0.5633 | 0.4478 | 0.6692 |
| 0.607 | 5.0601 | 926 | 0.4240 | 0.5777 | 0.4240 | 0.6512 |
| 0.607 | 5.0710 | 928 | 0.4287 | 0.6599 | 0.4287 | 0.6548 |
| 0.607 | 5.0820 | 930 | 0.5060 | 0.6993 | 0.5060 | 0.7114 |
| 0.607 | 5.0929 | 932 | 0.5078 | 0.7108 | 0.5078 | 0.7126 |
| 0.607 | 5.1038 | 934 | 0.4337 | 0.6691 | 0.4337 | 0.6586 |
| 0.607 | 5.1148 | 936 | 0.4016 | 0.5957 | 0.4016 | 0.6337 |
| 0.607 | 5.1257 | 938 | 0.4040 | 0.6054 | 0.4040 | 0.6356 |
| 0.607 | 5.1366 | 940 | 0.4332 | 0.6666 | 0.4332 | 0.6582 |
| 0.607 | 5.1475 | 942 | 0.4670 | 0.6822 | 0.4670 | 0.6834 |
| 0.607 | 5.1585 | 944 | 0.4278 | 0.6525 | 0.4278 | 0.6541 |
| 0.607 | 5.1694 | 946 | 0.4020 | 0.5923 | 0.4020 | 0.6340 |
| 0.607 | 5.1803 | 948 | 0.3961 | 0.6029 | 0.3961 | 0.6294 |
| 0.607 | 5.1913 | 950 | 0.4045 | 0.6590 | 0.4045 | 0.6360 |
| 0.607 | 5.2022 | 952 | 0.4005 | 0.6693 | 0.4005 | 0.6329 |
| 0.607 | 5.2131 | 954 | 0.3911 | 0.6353 | 0.3911 | 0.6254 |
| 0.607 | 5.2240 | 956 | 0.3914 | 0.6274 | 0.3914 | 0.6257 |
| 0.607 | 5.2350 | 958 | 0.3976 | 0.6407 | 0.3976 | 0.6305 |
| 0.607 | 5.2459 | 960 | 0.4192 | 0.6575 | 0.4192 | 0.6474 |
| 0.607 | 5.2568 | 962 | 0.4592 | 0.6959 | 0.4592 | 0.6777 |
| 0.607 | 5.2678 | 964 | 0.4932 | 0.7077 | 0.4932 | 0.7023 |
| 0.607 | 5.2787 | 966 | 0.4707 | 0.7012 | 0.4707 | 0.6860 |
| 0.607 | 5.2896 | 968 | 0.4554 | 0.6747 | 0.4554 | 0.6748 |
| 0.607 | 5.3005 | 970 | 0.4224 | 0.6723 | 0.4224 | 0.6499 |
| 0.607 | 5.3115 | 972 | 0.4357 | 0.6838 | 0.4357 | 0.6601 |
| 0.607 | 5.3224 | 974 | 0.4432 | 0.6913 | 0.4432 | 0.6658 |
| 0.607 | 5.3333 | 976 | 0.4324 | 0.7049 | 0.4324 | 0.6576 |
| 0.607 | 5.3443 | 978 | 0.3937 | 0.6489 | 0.3937 | 0.6275 |
| 0.607 | 5.3552 | 980 | 0.3907 | 0.6107 | 0.3907 | 0.6251 |
| 0.607 | 5.3661 | 982 | 0.3873 | 0.6278 | 0.3873 | 0.6223 |
| 0.607 | 5.3770 | 984 | 0.4135 | 0.6763 | 0.4135 | 0.6430 |
| 0.607 | 5.3880 | 986 | 0.4870 | 0.6880 | 0.4870 | 0.6979 |
| 0.607 | 5.3989 | 988 | 0.4905 | 0.6687 | 0.4905 | 0.7003 |
| 0.607 | 5.4098 | 990 | 0.4580 | 0.6340 | 0.4580 | 0.6768 |
| 0.607 | 5.4208 | 992 | 0.4584 | 0.6207 | 0.4584 | 0.6770 |
| 0.607 | 5.4317 | 994 | 0.5051 | 0.6656 | 0.5051 | 0.7107 |
| 0.607 | 5.4426 | 996 | 0.5364 | 0.6903 | 0.5364 | 0.7324 |
| 0.607 | 5.4536 | 998 | 0.4827 | 0.6743 | 0.4827 | 0.6948 |
| 0.1852 | 5.4645 | 1000 | 0.4320 | 0.6493 | 0.4320 | 0.6573 |
| 0.1852 | 5.4754 | 1002 | 0.4247 | 0.6492 | 0.4247 | 0.6517 |
| 0.1852 | 5.4863 | 1004 | 0.4293 | 0.6719 | 0.4293 | 0.6552 |
| 0.1852 | 5.4973 | 1006 | 0.4350 | 0.6798 | 0.4350 | 0.6595 |
| 0.1852 | 5.5082 | 1008 | 0.4308 | 0.6683 | 0.4308 | 0.6564 |
| 0.1852 | 5.5191 | 1010 | 0.4225 | 0.6657 | 0.4225 | 0.6500 |
| 0.1852 | 5.5301 | 1012 | 0.4213 | 0.6571 | 0.4213 | 0.6491 |
| 0.1852 | 5.5410 | 1014 | 0.4207 | 0.6625 | 0.4207 | 0.6486 |
| 0.1852 | 5.5519 | 1016 | 0.4183 | 0.6675 | 0.4183 | 0.6468 |
| 0.1852 | 5.5628 | 1018 | 0.4326 | 0.6985 | 0.4326 | 0.6577 |
| 0.1852 | 5.5738 | 1020 | 0.4429 | 0.6939 | 0.4429 | 0.6655 |
| 0.1852 | 5.5847 | 1022 | 0.4192 | 0.6668 | 0.4192 | 0.6475 |
| 0.1852 | 5.5956 | 1024 | 0.4059 | 0.6216 | 0.4059 | 0.6371 |
| 0.1852 | 5.6066 | 1026 | 0.4191 | 0.6580 | 0.4191 | 0.6474 |
| 0.1852 | 5.6175 | 1028 | 0.4654 | 0.6998 | 0.4654 | 0.6822 |
| 0.1852 | 5.6284 | 1030 | 0.4481 | 0.6937 | 0.4481 | 0.6694 |
| 0.1852 | 5.6393 | 1032 | 0.4136 | 0.6481 | 0.4136 | 0.6431 |
| 0.1852 | 5.6503 | 1034 | 0.4073 | 0.6397 | 0.4073 | 0.6382 |
| 0.1852 | 5.6612 | 1036 | 0.4103 | 0.6494 | 0.4103 | 0.6405 |
| 0.1852 | 5.6721 | 1038 | 0.3972 | 0.6218 | 0.3972 | 0.6302 |
| 0.1852 | 5.6831 | 1040 | 0.3993 | 0.6179 | 0.3993 | 0.6319 |
| 0.1852 | 5.6940 | 1042 | 0.4114 | 0.6459 | 0.4114 | 0.6414 |
| 0.1852 | 5.7049 | 1044 | 0.4259 | 0.6660 | 0.4259 | 0.6526 |
| 0.1852 | 5.7158 | 1046 | 0.4083 | 0.6416 | 0.4083 | 0.6390 |
| 0.1852 | 5.7268 | 1048 | 0.4063 | 0.6158 | 0.4063 | 0.6374 |
| 0.1852 | 5.7377 | 1050 | 0.4115 | 0.6436 | 0.4115 | 0.6415 |
| 0.1852 | 5.7486 | 1052 | 0.4446 | 0.6665 | 0.4446 | 0.6668 |
| 0.1852 | 5.7596 | 1054 | 0.4474 | 0.6730 | 0.4474 | 0.6689 |
| 0.1852 | 5.7705 | 1056 | 0.4329 | 0.6828 | 0.4329 | 0.6580 |
| 0.1852 | 5.7814 | 1058 | 0.4264 | 0.6928 | 0.4264 | 0.6530 |
| 0.1852 | 5.7923 | 1060 | 0.3989 | 0.6541 | 0.3989 | 0.6316 |
| 0.1852 | 5.8033 | 1062 | 0.3933 | 0.6449 | 0.3933 | 0.6271 |
| 0.1852 | 5.8142 | 1064 | 0.3984 | 0.6541 | 0.3984 | 0.6312 |
| 0.1852 | 5.8251 | 1066 | 0.4156 | 0.7010 | 0.4156 | 0.6447 |
| 0.1852 | 5.8361 | 1068 | 0.4465 | 0.7275 | 0.4465 | 0.6682 |
| 0.1852 | 5.8470 | 1070 | 0.4563 | 0.7226 | 0.4563 | 0.6755 |
| 0.1852 | 5.8579 | 1072 | 0.4377 | 0.6982 | 0.4377 | 0.6616 |
| 0.1852 | 5.8689 | 1074 | 0.4430 | 0.7050 | 0.4430 | 0.6656 |
| 0.1852 | 5.8798 | 1076 | 0.4522 | 0.7094 | 0.4522 | 0.6725 |
| 0.1852 | 5.8907 | 1078 | 0.4620 | 0.7211 | 0.4620 | 0.6797 |
| 0.1852 | 5.9016 | 1080 | 0.4378 | 0.7082 | 0.4378 | 0.6617 |
| 0.1852 | 5.9126 | 1082 | 0.4189 | 0.6963 | 0.4189 | 0.6472 |
| 0.1852 | 5.9235 | 1084 | 0.4231 | 0.7008 | 0.4231 | 0.6504 |
| 0.1852 | 5.9344 | 1086 | 0.4503 | 0.7162 | 0.4503 | 0.6711 |
| 0.1852 | 5.9454 | 1088 | 0.4532 | 0.7163 | 0.4532 | 0.6732 |
| 0.1852 | 5.9563 | 1090 | 0.4238 | 0.6983 | 0.4238 | 0.6510 |
| 0.1852 | 5.9672 | 1092 | 0.4142 | 0.6819 | 0.4142 | 0.6436 |
| 0.1852 | 5.9781 | 1094 | 0.4034 | 0.6244 | 0.4034 | 0.6352 |
| 0.1852 | 5.9891 | 1096 | 0.4058 | 0.6491 | 0.4058 | 0.6370 |
| 0.1852 | 6.0 | 1098 | 0.4351 | 0.7017 | 0.4351 | 0.6596 |
| 0.1852 | 6.0109 | 1100 | 0.4694 | 0.7113 | 0.4694 | 0.6852 |
| 0.1852 | 6.0219 | 1102 | 0.4650 | 0.7147 | 0.4650 | 0.6819 |
| 0.1852 | 6.0328 | 1104 | 0.4280 | 0.6761 | 0.4280 | 0.6542 |
| 0.1852 | 6.0437 | 1106 | 0.4276 | 0.6620 | 0.4276 | 0.6539 |
| 0.1852 | 6.0546 | 1108 | 0.4554 | 0.6883 | 0.4554 | 0.6748 |
| 0.1852 | 6.0656 | 1110 | 0.5245 | 0.7137 | 0.5245 | 0.7242 |
| 0.1852 | 6.0765 | 1112 | 0.5092 | 0.7145 | 0.5092 | 0.7136 |
| 0.1852 | 6.0874 | 1114 | 0.4449 | 0.6797 | 0.4449 | 0.6670 |
| 0.1852 | 6.0984 | 1116 | 0.4200 | 0.6340 | 0.4200 | 0.6481 |
| 0.1852 | 6.1093 | 1118 | 0.4299 | 0.6769 | 0.4299 | 0.6557 |
| 0.1852 | 6.1202 | 1120 | 0.4325 | 0.6834 | 0.4325 | 0.6576 |
| 0.1852 | 6.1311 | 1122 | 0.4192 | 0.6805 | 0.4192 | 0.6474 |
| 0.1852 | 6.1421 | 1124 | 0.4198 | 0.6868 | 0.4198 | 0.6479 |
| 0.1852 | 6.1530 | 1126 | 0.4223 | 0.6880 | 0.4223 | 0.6498 |
| 0.1852 | 6.1639 | 1128 | 0.4492 | 0.7026 | 0.4492 | 0.6702 |
| 0.1852 | 6.1749 | 1130 | 0.4286 | 0.6797 | 0.4286 | 0.6547 |
| 0.1852 | 6.1858 | 1132 | 0.4104 | 0.6358 | 0.4104 | 0.6406 |
| 0.1852 | 6.1967 | 1134 | 0.4158 | 0.6287 | 0.4158 | 0.6449 |
| 0.1852 | 6.2077 | 1136 | 0.4626 | 0.7050 | 0.4626 | 0.6802 |
| 0.1852 | 6.2186 | 1138 | 0.4914 | 0.7001 | 0.4914 | 0.7010 |
| 0.1852 | 6.2295 | 1140 | 0.4733 | 0.7007 | 0.4733 | 0.6880 |
| 0.1852 | 6.2404 | 1142 | 0.5011 | 0.6916 | 0.5011 | 0.7079 |
| 0.1852 | 6.2514 | 1144 | 0.4694 | 0.6893 | 0.4694 | 0.6851 |
| 0.1852 | 6.2623 | 1146 | 0.4360 | 0.6512 | 0.4360 | 0.6603 |
| 0.1852 | 6.2732 | 1148 | 0.4260 | 0.6550 | 0.4260 | 0.6527 |
| 0.1852 | 6.2842 | 1150 | 0.4483 | 0.7115 | 0.4483 | 0.6696 |
| 0.1852 | 6.2951 | 1152 | 0.4879 | 0.7198 | 0.4879 | 0.6985 |
| 0.1852 | 6.3060 | 1154 | 0.4669 | 0.7204 | 0.4669 | 0.6833 |
| 0.1852 | 6.3169 | 1156 | 0.4344 | 0.7046 | 0.4344 | 0.6591 |
| 0.1852 | 6.3279 | 1158 | 0.4213 | 0.6813 | 0.4213 | 0.6491 |
| 0.1852 | 6.3388 | 1160 | 0.4346 | 0.7000 | 0.4346 | 0.6593 |
| 0.1852 | 6.3497 | 1162 | 0.4357 | 0.7043 | 0.4357 | 0.6600 |
| 0.1852 | 6.3607 | 1164 | 0.4407 | 0.7048 | 0.4407 | 0.6638 |
| 0.1852 | 6.3716 | 1166 | 0.4519 | 0.7062 | 0.4519 | 0.6723 |
| 0.1852 | 6.3825 | 1168 | 0.4799 | 0.7116 | 0.4799 | 0.6928 |
| 0.1852 | 6.3934 | 1170 | 0.4874 | 0.7101 | 0.4874 | 0.6981 |
| 0.1852 | 6.4044 | 1172 | 0.4477 | 0.7031 | 0.4477 | 0.6691 |
| 0.1852 | 6.4153 | 1174 | 0.4559 | 0.7047 | 0.4559 | 0.6752 |
| 0.1852 | 6.4262 | 1176 | 0.4840 | 0.7051 | 0.4840 | 0.6957 |
| 0.1852 | 6.4372 | 1178 | 0.4612 | 0.6998 | 0.4612 | 0.6791 |
| 0.1852 | 6.4481 | 1180 | 0.4506 | 0.7047 | 0.4506 | 0.6713 |
| 0.1852 | 6.4590 | 1182 | 0.4418 | 0.7175 | 0.4418 | 0.6647 |
| 0.1852 | 6.4699 | 1184 | 0.4305 | 0.7111 | 0.4305 | 0.6561 |
| 0.1852 | 6.4809 | 1186 | 0.4645 | 0.7206 | 0.4645 | 0.6816 |
| 0.1852 | 6.4918 | 1188 | 0.4495 | 0.7247 | 0.4495 | 0.6705 |
| 0.1852 | 6.5027 | 1190 | 0.4103 | 0.6605 | 0.4103 | 0.6406 |
| 0.1852 | 6.5137 | 1192 | 0.4052 | 0.6451 | 0.4052 | 0.6366 |
| 0.1852 | 6.5246 | 1194 | 0.4201 | 0.7050 | 0.4201 | 0.6482 |
| 0.1852 | 6.5355 | 1196 | 0.4506 | 0.7282 | 0.4506 | 0.6712 |
| 0.1852 | 6.5464 | 1198 | 0.5071 | 0.7341 | 0.5071 | 0.7121 |
| 0.1852 | 6.5574 | 1200 | 0.4744 | 0.7344 | 0.4744 | 0.6888 |
| 0.1852 | 6.5683 | 1202 | 0.4147 | 0.6918 | 0.4147 | 0.6440 |
| 0.1852 | 6.5792 | 1204 | 0.4005 | 0.6466 | 0.4005 | 0.6329 |
| 0.1852 | 6.5902 | 1206 | 0.4075 | 0.6733 | 0.4075 | 0.6384 |
| 0.1852 | 6.6011 | 1208 | 0.4383 | 0.7230 | 0.4383 | 0.6620 |
| 0.1852 | 6.6120 | 1210 | 0.4884 | 0.7323 | 0.4884 | 0.6988 |
| 0.1852 | 6.6230 | 1212 | 0.4779 | 0.7242 | 0.4779 | 0.6913 |
| 0.1852 | 6.6339 | 1214 | 0.4233 | 0.6798 | 0.4233 | 0.6506 |
| 0.1852 | 6.6448 | 1216 | 0.4066 | 0.6152 | 0.4066 | 0.6377 |
| 0.1852 | 6.6557 | 1218 | 0.4090 | 0.6477 | 0.4090 | 0.6396 |
| 0.1852 | 6.6667 | 1220 | 0.4467 | 0.7055 | 0.4467 | 0.6684 |
| 0.1852 | 6.6776 | 1222 | 0.4711 | 0.7075 | 0.4711 | 0.6864 |
| 0.1852 | 6.6885 | 1224 | 0.4467 | 0.6987 | 0.4467 | 0.6684 |
| 0.1852 | 6.6995 | 1226 | 0.4199 | 0.6541 | 0.4199 | 0.6480 |
| 0.1852 | 6.7104 | 1228 | 0.4239 | 0.6494 | 0.4239 | 0.6511 |
| 0.1852 | 6.7213 | 1230 | 0.4368 | 0.6788 | 0.4368 | 0.6609 |
| 0.1852 | 6.7322 | 1232 | 0.4578 | 0.6976 | 0.4578 | 0.6766 |
| 0.1852 | 6.7432 | 1234 | 0.4486 | 0.6788 | 0.4486 | 0.6698 |
| 0.1852 | 6.7541 | 1236 | 0.4365 | 0.6617 | 0.4365 | 0.6607 |
| 0.1852 | 6.7650 | 1238 | 0.4243 | 0.6310 | 0.4243 | 0.6514 |
| 0.1852 | 6.7760 | 1240 | 0.4377 | 0.6804 | 0.4377 | 0.6616 |
| 0.1852 | 6.7869 | 1242 | 0.4795 | 0.7081 | 0.4795 | 0.6924 |
| 0.1852 | 6.7978 | 1244 | 0.4931 | 0.7244 | 0.4931 | 0.7022 |
| 0.1852 | 6.8087 | 1246 | 0.4598 | 0.7215 | 0.4598 | 0.6781 |
| 0.1852 | 6.8197 | 1248 | 0.4083 | 0.6790 | 0.4083 | 0.6390 |
| 0.1852 | 6.8306 | 1250 | 0.3938 | 0.6380 | 0.3938 | 0.6275 |
| 0.1852 | 6.8415 | 1252 | 0.3991 | 0.5910 | 0.3991 | 0.6317 |
| 0.1852 | 6.8525 | 1254 | 0.4002 | 0.6481 | 0.4002 | 0.6326 |
| 0.1852 | 6.8634 | 1256 | 0.4381 | 0.6848 | 0.4381 | 0.6619 |
| 0.1852 | 6.8743 | 1258 | 0.4608 | 0.7174 | 0.4608 | 0.6789 |
| 0.1852 | 6.8852 | 1260 | 0.4344 | 0.6823 | 0.4344 | 0.6591 |
| 0.1852 | 6.8962 | 1262 | 0.4248 | 0.6718 | 0.4248 | 0.6518 |
| 0.1852 | 6.9071 | 1264 | 0.4297 | 0.6717 | 0.4297 | 0.6555 |
| 0.1852 | 6.9180 | 1266 | 0.4393 | 0.6919 | 0.4393 | 0.6628 |
| 0.1852 | 6.9290 | 1268 | 0.4375 | 0.6946 | 0.4375 | 0.6615 |
| 0.1852 | 6.9399 | 1270 | 0.4354 | 0.6885 | 0.4354 | 0.6598 |
| 0.1852 | 6.9508 | 1272 | 0.4268 | 0.6819 | 0.4268 | 0.6533 |
| 0.1852 | 6.9617 | 1274 | 0.4318 | 0.6843 | 0.4318 | 0.6571 |
| 0.1852 | 6.9727 | 1276 | 0.4435 | 0.6998 | 0.4435 | 0.6659 |
| 0.1852 | 6.9836 | 1278 | 0.4267 | 0.6840 | 0.4267 | 0.6532 |
| 0.1852 | 6.9945 | 1280 | 0.4029 | 0.6604 | 0.4029 | 0.6347 |
| 0.1852 | 7.0055 | 1282 | 0.4027 | 0.6528 | 0.4027 | 0.6346 |
| 0.1852 | 7.0164 | 1284 | 0.4186 | 0.6806 | 0.4186 | 0.6470 |
| 0.1852 | 7.0273 | 1286 | 0.4657 | 0.7037 | 0.4657 | 0.6824 |
| 0.1852 | 7.0383 | 1288 | 0.4849 | 0.6944 | 0.4849 | 0.6963 |
| 0.1852 | 7.0492 | 1290 | 0.4693 | 0.6904 | 0.4693 | 0.6850 |
| 0.1852 | 7.0601 | 1292 | 0.4508 | 0.6703 | 0.4508 | 0.6715 |
| 0.1852 | 7.0710 | 1294 | 0.4459 | 0.6717 | 0.4459 | 0.6677 |
| 0.1852 | 7.0820 | 1296 | 0.4380 | 0.6693 | 0.4380 | 0.6618 |
| 0.1852 | 7.0929 | 1298 | 0.4264 | 0.6453 | 0.4264 | 0.6530 |
| 0.1852 | 7.1038 | 1300 | 0.4285 | 0.6649 | 0.4285 | 0.6546 |
| 0.1852 | 7.1148 | 1302 | 0.4343 | 0.6704 | 0.4343 | 0.6590 |
| 0.1852 | 7.1257 | 1304 | 0.4579 | 0.6918 | 0.4579 | 0.6767 |
| 0.1852 | 7.1366 | 1306 | 0.4737 | 0.6930 | 0.4737 | 0.6883 |
| 0.1852 | 7.1475 | 1308 | 0.4556 | 0.6918 | 0.4556 | 0.6750 |
| 0.1852 | 7.1585 | 1310 | 0.4215 | 0.6299 | 0.4215 | 0.6492 |
| 0.1852 | 7.1694 | 1312 | 0.4194 | 0.6099 | 0.4194 | 0.6476 |
| 0.1852 | 7.1803 | 1314 | 0.4180 | 0.6305 | 0.4180 | 0.6465 |
| 0.1852 | 7.1913 | 1316 | 0.4355 | 0.6876 | 0.4355 | 0.6599 |
| 0.1852 | 7.2022 | 1318 | 0.4809 | 0.7009 | 0.4809 | 0.6935 |
| 0.1852 | 7.2131 | 1320 | 0.4778 | 0.7091 | 0.4778 | 0.6912 |
| 0.1852 | 7.2240 | 1322 | 0.4407 | 0.6889 | 0.4407 | 0.6638 |
| 0.1852 | 7.2350 | 1324 | 0.4099 | 0.6628 | 0.4099 | 0.6402 |
| 0.1852 | 7.2459 | 1326 | 0.4100 | 0.6632 | 0.4100 | 0.6403 |
| 0.1852 | 7.2568 | 1328 | 0.4157 | 0.6854 | 0.4157 | 0.6448 |
| 0.1852 | 7.2678 | 1330 | 0.4222 | 0.6841 | 0.4222 | 0.6497 |
| 0.1852 | 7.2787 | 1332 | 0.4325 | 0.6869 | 0.4325 | 0.6577 |
| 0.1852 | 7.2896 | 1334 | 0.4407 | 0.6864 | 0.4407 | 0.6638 |
| 0.1852 | 7.3005 | 1336 | 0.4551 | 0.6968 | 0.4551 | 0.6746 |
| 0.1852 | 7.3115 | 1338 | 0.4510 | 0.6971 | 0.4510 | 0.6716 |
| 0.1852 | 7.3224 | 1340 | 0.4341 | 0.6888 | 0.4341 | 0.6589 |
| 0.1852 | 7.3333 | 1342 | 0.4300 | 0.6841 | 0.4300 | 0.6557 |
| 0.1852 | 7.3443 | 1344 | 0.4179 | 0.6815 | 0.4179 | 0.6464 |
| 0.1852 | 7.3552 | 1346 | 0.4197 | 0.6805 | 0.4197 | 0.6478 |
| 0.1852 | 7.3661 | 1348 | 0.4346 | 0.6918 | 0.4346 | 0.6592 |
| 0.1852 | 7.3770 | 1350 | 0.4490 | 0.6891 | 0.4490 | 0.6701 |
| 0.1852 | 7.3880 | 1352 | 0.4805 | 0.6963 | 0.4805 | 0.6932 |
| 0.1852 | 7.3989 | 1354 | 0.4867 | 0.6956 | 0.4867 | 0.6976 |
| 0.1852 | 7.4098 | 1356 | 0.4546 | 0.6946 | 0.4546 | 0.6743 |
| 0.1852 | 7.4208 | 1358 | 0.4333 | 0.6687 | 0.4333 | 0.6583 |
| 0.1852 | 7.4317 | 1360 | 0.4290 | 0.6681 | 0.4290 | 0.6549 |
| 0.1852 | 7.4426 | 1362 | 0.4357 | 0.6868 | 0.4357 | 0.6601 |
| 0.1852 | 7.4536 | 1364 | 0.4292 | 0.6737 | 0.4292 | 0.6552 |
| 0.1852 | 7.4645 | 1366 | 0.4217 | 0.6714 | 0.4217 | 0.6494 |
| 0.1852 | 7.4754 | 1368 | 0.4232 | 0.6777 | 0.4232 | 0.6505 |
| 0.1852 | 7.4863 | 1370 | 0.4401 | 0.6968 | 0.4401 | 0.6634 |
| 0.1852 | 7.4973 | 1372 | 0.4635 | 0.7053 | 0.4635 | 0.6808 |
| 0.1852 | 7.5082 | 1374 | 0.4409 | 0.7022 | 0.4409 | 0.6640 |
| 0.1852 | 7.5191 | 1376 | 0.4160 | 0.6734 | 0.4160 | 0.6450 |
| 0.1852 | 7.5301 | 1378 | 0.4157 | 0.6535 | 0.4157 | 0.6447 |
| 0.1852 | 7.5410 | 1380 | 0.4233 | 0.6606 | 0.4233 | 0.6506 |
| 0.1852 | 7.5519 | 1382 | 0.4605 | 0.7065 | 0.4605 | 0.6786 |
| 0.1852 | 7.5628 | 1384 | 0.5111 | 0.7210 | 0.5111 | 0.7149 |
| 0.1852 | 7.5738 | 1386 | 0.5051 | 0.7207 | 0.5051 | 0.7107 |
| 0.1852 | 7.5847 | 1388 | 0.4839 | 0.7172 | 0.4839 | 0.6956 |
| 0.1852 | 7.5956 | 1390 | 0.4717 | 0.7065 | 0.4717 | 0.6868 |
| 0.1852 | 7.6066 | 1392 | 0.4343 | 0.6925 | 0.4343 | 0.6590 |
| 0.1852 | 7.6175 | 1394 | 0.4239 | 0.6708 | 0.4239 | 0.6511 |
| 0.1852 | 7.6284 | 1396 | 0.4332 | 0.6927 | 0.4332 | 0.6582 |
| 0.1852 | 7.6393 | 1398 | 0.4614 | 0.7086 | 0.4614 | 0.6793 |
| 0.1852 | 7.6503 | 1400 | 0.4736 | 0.7113 | 0.4736 | 0.6882 |
| 0.1852 | 7.6612 | 1402 | 0.4491 | 0.7036 | 0.4491 | 0.6702 |
| 0.1852 | 7.6721 | 1404 | 0.4178 | 0.6767 | 0.4178 | 0.6464 |
| 0.1852 | 7.6831 | 1406 | 0.4130 | 0.6682 | 0.4130 | 0.6427 |
| 0.1852 | 7.6940 | 1408 | 0.4203 | 0.6773 | 0.4203 | 0.6483 |
| 0.1852 | 7.7049 | 1410 | 0.4466 | 0.6960 | 0.4466 | 0.6683 |
| 0.1852 | 7.7158 | 1412 | 0.4622 | 0.6976 | 0.4622 | 0.6798 |
| 0.1852 | 7.7268 | 1414 | 0.4506 | 0.6874 | 0.4506 | 0.6713 |
| 0.1852 | 7.7377 | 1416 | 0.4271 | 0.6711 | 0.4271 | 0.6535 |
| 0.1852 | 7.7486 | 1418 | 0.4153 | 0.6489 | 0.4153 | 0.6445 |
| 0.1852 | 7.7596 | 1420 | 0.4139 | 0.6568 | 0.4139 | 0.6434 |
| 0.1852 | 7.7705 | 1422 | 0.4192 | 0.6774 | 0.4192 | 0.6474 |
| 0.1852 | 7.7814 | 1424 | 0.4433 | 0.6980 | 0.4433 | 0.6658 |
| 0.1852 | 7.7923 | 1426 | 0.4484 | 0.6972 | 0.4484 | 0.6696 |
| 0.1852 | 7.8033 | 1428 | 0.4346 | 0.6842 | 0.4346 | 0.6592 |
| 0.1852 | 7.8142 | 1430 | 0.4280 | 0.6850 | 0.4280 | 0.6542 |
| 0.1852 | 7.8251 | 1432 | 0.4237 | 0.6789 | 0.4237 | 0.6509 |
| 0.1852 | 7.8361 | 1434 | 0.4186 | 0.6638 | 0.4186 | 0.6470 |
| 0.1852 | 7.8470 | 1436 | 0.4172 | 0.6560 | 0.4172 | 0.6459 |
| 0.1852 | 7.8579 | 1438 | 0.4202 | 0.6600 | 0.4202 | 0.6482 |
| 0.1852 | 7.8689 | 1440 | 0.4404 | 0.6822 | 0.4404 | 0.6637 |
| 0.1852 | 7.8798 | 1442 | 0.4568 | 0.7003 | 0.4568 | 0.6759 |
| 0.1852 | 7.8907 | 1444 | 0.4518 | 0.6967 | 0.4518 | 0.6721 |
| 0.1852 | 7.9016 | 1446 | 0.4608 | 0.6990 | 0.4608 | 0.6789 |
| 0.1852 | 7.9126 | 1448 | 0.4810 | 0.7035 | 0.4810 | 0.6936 |
| 0.1852 | 7.9235 | 1450 | 0.4910 | 0.7067 | 0.4910 | 0.7007 |
| 0.1852 | 7.9344 | 1452 | 0.4751 | 0.6987 | 0.4751 | 0.6893 |
| 0.1852 | 7.9454 | 1454 | 0.4501 | 0.6976 | 0.4501 | 0.6709 |
| 0.1852 | 7.9563 | 1456 | 0.4303 | 0.6766 | 0.4303 | 0.6560 |
| 0.1852 | 7.9672 | 1458 | 0.4254 | 0.6711 | 0.4254 | 0.6522 |
| 0.1852 | 7.9781 | 1460 | 0.4245 | 0.6826 | 0.4245 | 0.6516 |
| 0.1852 | 7.9891 | 1462 | 0.4376 | 0.6995 | 0.4376 | 0.6615 |
| 0.1852 | 8.0 | 1464 | 0.4732 | 0.6968 | 0.4732 | 0.6879 |
| 0.1852 | 8.0109 | 1466 | 0.4776 | 0.6990 | 0.4776 | 0.6911 |
| 0.1852 | 8.0219 | 1468 | 0.4555 | 0.6934 | 0.4555 | 0.6749 |
| 0.1852 | 8.0328 | 1470 | 0.4279 | 0.6896 | 0.4279 | 0.6541 |
| 0.1852 | 8.0437 | 1472 | 0.4052 | 0.6344 | 0.4052 | 0.6366 |
| 0.1852 | 8.0546 | 1474 | 0.4020 | 0.6226 | 0.4020 | 0.6341 |
| 0.1852 | 8.0656 | 1476 | 0.4028 | 0.6442 | 0.4028 | 0.6347 |
| 0.1852 | 8.0765 | 1478 | 0.4190 | 0.6733 | 0.4190 | 0.6473 |
| 0.1852 | 8.0874 | 1480 | 0.4444 | 0.6918 | 0.4444 | 0.6666 |
| 0.1852 | 8.0984 | 1482 | 0.4747 | 0.6942 | 0.4747 | 0.6890 |
| 0.1852 | 8.1093 | 1484 | 0.4801 | 0.6944 | 0.4801 | 0.6929 |
| 0.1852 | 8.1202 | 1486 | 0.4508 | 0.6868 | 0.4508 | 0.6714 |
| 0.1852 | 8.1311 | 1488 | 0.4249 | 0.6824 | 0.4249 | 0.6518 |
| 0.1852 | 8.1421 | 1490 | 0.4144 | 0.6426 | 0.4144 | 0.6437 |
| 0.1852 | 8.1530 | 1492 | 0.4156 | 0.6520 | 0.4156 | 0.6447 |
| 0.1852 | 8.1639 | 1494 | 0.4291 | 0.6830 | 0.4291 | 0.6550 |
| 0.1852 | 8.1749 | 1496 | 0.4531 | 0.7015 | 0.4531 | 0.6732 |
| 0.1852 | 8.1858 | 1498 | 0.4617 | 0.6963 | 0.4617 | 0.6795 |
| 0.0966 | 8.1967 | 1500 | 0.4532 | 0.6991 | 0.4532 | 0.6732 |
| 0.0966 | 8.2077 | 1502 | 0.4287 | 0.6837 | 0.4287 | 0.6547 |
| 0.0966 | 8.2186 | 1504 | 0.4282 | 0.6810 | 0.4282 | 0.6544 |
| 0.0966 | 8.2295 | 1506 | 0.4372 | 0.6921 | 0.4372 | 0.6612 |
| 0.0966 | 8.2404 | 1508 | 0.4438 | 0.6869 | 0.4438 | 0.6662 |
| 0.0966 | 8.2514 | 1510 | 0.4343 | 0.6828 | 0.4343 | 0.6590 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
yjwon/mp_mistral7bv3_sft_dpo_beta1e-1_epoch5
|
yjwon
| 2024-11-06T01:45:18Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-06T01:44:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
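The card leaves this section empty; below is a minimal sketch, assuming the checkpoint loads as a standard 🤗 Transformers text-generation model (the prompt is illustrative, not from the card):
```python
from transformers import pipeline

# Hypothetical usage sketch; the model id is this repo's id.
generator = pipeline(
    "text-generation",
    model="yjwon/mp_mistral7bv3_sft_dpo_beta1e-1_epoch5",
    device_map="auto",
)
print(generator("Explain DPO in one sentence.", max_new_tokens=64)[0]["generated_text"])
```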
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yjwon/mp_mistral7bv3_sft_dpo_beta1e-1_epoch4
|
yjwon
| 2024-11-06T01:43:05Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-06T01:41:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
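The card leaves this section empty; since the repo is tagged `conversational`, here is a sketch using the tokenizer's chat template (assuming one is defined for this checkpoint):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yjwon/mp_mistral7bv3_sft_dpo_beta1e-1_epoch4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Assumes the tokenizer defines a chat template (suggested by the "conversational" tag).
messages = [{"role": "user", "content": "Summarize preference optimization in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```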
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yjwon/mp_mistral7bv3_sft_ogd_rms_epoch5
|
yjwon
| 2024-11-06T01:37:46Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-06T01:32:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
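The card leaves this section empty; a plain `generate` sketch with sampling, assuming standard causal-LM loading (the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yjwon/mp_mistral7bv3_sft_ogd_rms_epoch5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Question: What is supervised fine-tuning?\nAnswer:"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```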
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
koshimaki/dinosiglip-224px-1b-v6-7
|
koshimaki
| 2024-11-06T01:36:54Z | 109 | 0 |
transformers
|
[
"transformers",
"safetensors",
"prismatic",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] |
feature-extraction
| 2024-11-06T01:33:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
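The card leaves this section empty; the `custom_code` tag means the repo ships its own modeling code, so loading requires `trust_remote_code=True`. A minimal sketch (the exact forward inputs of this prismatic backbone are not documented here):
```python
from transformers import AutoModel

# custom_code tag => the architecture is defined in the repo, not in transformers itself.
model = AutoModel.from_pretrained(
    "koshimaki/dinosiglip-224px-1b-v6-7",
    trust_remote_code=True,
)
print(model.config)  # inspect the config to learn the expected inputs
```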
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xueyj/task-13-google-gemma-2b
|
xueyj
| 2024-11-06T01:33:21Z | 327 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | 2024-10-11T13:59:51Z |
---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
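The card leaves this section empty; since this is a PEFT adapter for `google/gemma-2b` (per the card metadata), a minimal loading sketch:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
model = PeftModel.from_pretrained(base, "xueyj/task-13-google-gemma-2b")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0], skip_special_tokens=True))
```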
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0
|
Judiht/Qwen2_5-1_5B_dataset20_20241105_201325
|
Judiht
| 2024-11-06T01:17:32Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"region:us"
] | null | 2024-11-06T01:17:30Z |
---
base_model: Qwen/Qwen2.5-1.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
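The card leaves this section empty; per the metadata this is a PEFT adapter for `Qwen/Qwen2.5-1.5B`. A sketch that attaches the adapter and merges it into the base for adapter-free inference (assumes a LoRA-style adapter):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")
model = PeftModel.from_pretrained(base, "Judiht/Qwen2_5-1_5B_dataset20_20241105_201325")

# merge_and_unload folds LoRA weights into the base model (LoRA adapters only).
merged = model.merge_and_unload()
merged.save_pretrained("qwen2.5-1.5b-dataset20-merged")  # hypothetical output dir
```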
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
koshimaki/dinosiglip-224px-1b-v6-3
|
koshimaki
| 2024-11-06T01:11:29Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"prismatic",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] |
feature-extraction
| 2024-11-06T01:08:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf
|
RichardErkhov
| 2024-11-06T01:10:04Z | 6 | 0 | null |
[
"gguf",
"arxiv:2405.00675",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-05T21:08:04Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mistral7B-PairRM-SPPO-Iter1 - GGUF
- Model creator: https://huggingface.co/UCLA-AGI/
- Original model: https://huggingface.co/UCLA-AGI/Mistral7B-PairRM-SPPO-Iter1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mistral7B-PairRM-SPPO-Iter1.Q2_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q2_K.gguf) | Q2_K | 2.53GB |
| [Mistral7B-PairRM-SPPO-Iter1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Mistral7B-PairRM-SPPO-Iter1.Q3_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q3_K.gguf) | Q3_K | 3.28GB |
| [Mistral7B-PairRM-SPPO-Iter1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Mistral7B-PairRM-SPPO-Iter1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Mistral7B-PairRM-SPPO-Iter1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Mistral7B-PairRM-SPPO-Iter1.Q4_0.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Mistral7B-PairRM-SPPO-Iter1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Mistral7B-PairRM-SPPO-Iter1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Mistral7B-PairRM-SPPO-Iter1.Q4_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q4_K.gguf) | Q4_K | 4.07GB |
| [Mistral7B-PairRM-SPPO-Iter1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Mistral7B-PairRM-SPPO-Iter1.Q4_1.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Mistral7B-PairRM-SPPO-Iter1.Q5_0.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Mistral7B-PairRM-SPPO-Iter1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Mistral7B-PairRM-SPPO-Iter1.Q5_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q5_K.gguf) | Q5_K | 4.78GB |
| [Mistral7B-PairRM-SPPO-Iter1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Mistral7B-PairRM-SPPO-Iter1.Q5_1.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Mistral7B-PairRM-SPPO-Iter1.Q6_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q6_K.gguf) | Q6_K | 5.53GB |
| [Mistral7B-PairRM-SPPO-Iter1.Q8_0.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_Mistral7B-PairRM-SPPO-Iter1-gguf/blob/main/Mistral7B-PairRM-SPPO-Iter1.Q8_0.gguf) | Q8_0 | 7.17GB |
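These files run in any GGUF-compatible runtime; below is a minimal llama-cpp-python sketch, assuming the Q4_K_M file from the table has been downloaded locally (the runtime and prompt format are assumptions, not part of this card):
```python
from llama_cpp import Llama

# Path points at the locally downloaded quant from the table above.
llm = Llama(model_path="Mistral7B-PairRM-SPPO-Iter1.Q4_K_M.gguf", n_ctx=4096)
out = llm("[INST] What is self-play preference optimization? [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```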
Original model description:
---
license: apache-2.0
datasets:
- openbmb/UltraFeedback
language:
- en
pipeline_tag: text-generation
---
Self-Play Preference Optimization for Language Model Alignment (https://arxiv.org/abs/2405.00675)
# Mistral7B-PairRM-SPPO-Iter1
This model was developed using [Self-Play Preference Optimization](https://arxiv.org/abs/2405.00675) at iteration 1, with [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as the starting point. We used the prompt sets from the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, split into 3 parts for the 3 iterations as in [snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset](https://huggingface.co/datasets/snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset). All responses used are synthetic.
**This is the model reported in the paper**, with K=5 (5 responses generated per iteration). The Arena-Hard eval results are included on this model page.
## Links to Other Models
- [Mistral7B-PairRM-SPPO-Iter1](https://huggingface.co/UCLA-AGI/Mistral7B-PairRM-SPPO-Iter1)
- [Mistral7B-PairRM-SPPO-Iter2](https://huggingface.co/UCLA-AGI/Mistral7B-PairRM-SPPO-Iter2)
- [Mistral7B-PairRM-SPPO-Iter3](https://huggingface.co/UCLA-AGI/Mistral7B-PairRM-SPPO-Iter3)
- [Mistral7B-PairRM-SPPO](https://huggingface.co/UCLA-AGI/Mistral7B-PairRM-SPPO)
### Model Description
- Model type: A 7B parameter GPT-like model fine-tuned on synthetic datasets.
- Language(s) (NLP): Primarily English
- License: Apache-2.0
- Finetuned from model: mistralai/Mistral-7B-Instruct-v0.2
## [AlpacaEval Leaderboard Evaluation Results](https://tatsu-lab.github.io/alpaca_eval/)
| Model | LC. Win Rate | Win Rate | Avg. Length |
|-------------------------------------------|:------------:|:--------:|:-----------:|
| Mistral7B-PairRM-SPPO Iter 1 | 24.79 | 23.51 | 1855 |
| Mistral7B-PairRM-SPPO Iter 2 | 26.89 | 27.62 | 2019 |
| Mistral7B-PairRM-SPPO Iter 3 | 28.53 | 31.02 | 2163 |
| Mistral7B-PairRM-SPPO Iter 1 (best-of-16) | 28.71 | 27.77 | 1901 |
| Mistral7B-PairRM-SPPO Iter 2 (best-of-16) | 31.23 | 32.12 | 2035 |
| Mistral7B-PairRM-SPPO Iter 3 (best-of-16) | 32.13 | 34.94 | 2174 |
## [Arena-Hard Evaluation Results](https://github.com/lm-sys/arena-hard)
| Model | Score | 95% CI | Average # Tokens |
|----------|-----------|--------------|-----------|
| Mistral7B-PairRM-SPPO-Iter3 | 23.3 | (-1.8, 1.8) | 578 |
## [Open LLM Leaderboard Evaluation Results](https://github.com/EleutherAI/lm-evaluation-harness)
Results are reported by using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) v0.4.1
| | arc_challenge | truthfulqa_mc2 | winogrande | gsm8k | hellaswag | mmlu | average |
|--------|---------------|----------------|------------|-------|-----------|-------|---------|
| Mistral7B-PairRM-SPPO Iter 1 | 65.02 | 69.4 | 77.82 | 43.82 | 85.11 | 58.84 | 66.67 |
| Mistral7B-PairRM-SPPO Iter 2 | 65.53 | 69.55 | 77.03 | 44.35 | 85.29 | 58.72 | 66.75 |
| Mistral7B-PairRM-SPPO Iter 3 | 65.36 | 69.97 | 76.8 | 42.68 | 85.16 | 58.45 | 66.4 |
## [MT-Bench Evaluation Results](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge)
| | 1st Turn | 2nd Turn | Average |
|--------|----------|----------|---------|
| Mistral7B-PairRM-SPPO Iter 1 | 7.63 | 6.79 | 7.21 |
| Mistral7B-PairRM-SPPO Iter 2 | 7.90 | 7.08 | 7.49 |
| Mistral7B-PairRM-SPPO Iter 3 | 7.84 | 7.34 | 7.59 |
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- eta: 1000
- per_device_train_batch_size: 8
- gradient_accumulation_steps: 1
- seed: 42
- distributed_type: deepspeed_zero3
- num_devices: 8
- optimizer: RMSProp
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_train_epochs: 18.0 (stop at epoch=1.0)
## Citation
```
@misc{wu2024self,
title={Self-Play Preference Optimization for Language Model Alignment},
author={Wu, Yue and Sun, Zhiqing and Yuan, Huizhuo and Ji, Kaixuan and Yang, Yiming and Gu, Quanquan},
year={2024},
eprint={2405.00675},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
minoosh/bert-reg-biencoder-mse
|
minoosh
| 2024-11-06T01:09:42Z | 13 | 0 | null |
[
"pytorch",
"safetensors",
"bert",
"regression",
"biencoder",
"similarity",
"text-similarity",
"en",
"region:us"
] | null | 2024-11-06T00:01:55Z |
---
language: en
tags:
- bert
- regression
- biencoder
- similarity
pipeline_tag: text-similarity
---
# BiEncoder Regression Model
This model uses a BiEncoder architecture to produce similarity scores between text pairs.
## Model Details
- Base Model: bert-base-uncased
- Task: Regression
- Architecture: BiEncoder with cosine similarity
- Loss Function: mse
## Usage
```python
import torch

from transformers import AutoTokenizer, AutoModel
from modeling import BiEncoderModelRegression  # modeling.py ships with this repo
# Load model components
tokenizer = AutoTokenizer.from_pretrained("minoosh/bert-reg-biencoder-mse")
base_model = AutoModel.from_pretrained("bert-base-uncased")
model = BiEncoderModelRegression(base_model, loss_fn="mse")
# Load weights
state_dict = torch.load("pytorch_model.bin")
model.load_state_dict(state_dict)
# Prepare inputs
texts1 = ["first text"]
texts2 = ["second text"]
inputs = tokenizer(
texts1, texts2,
padding=True,
truncation=True,
return_tensors="pt"
)
# Get similarity scores
outputs = model(**inputs)
similarity_scores = outputs["logits"]
```
## Metrics
The model was trained with the `mse` loss and evaluated using:
- Mean Squared Error (MSE)
- Mean Absolute Error (MAE)
- Pearson Correlation
- Spearman Correlation
- Cosine Similarity
|
koshimaki/dinosiglip-224px-1b-v6-1
|
koshimaki
| 2024-11-06T01:01:49Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"prismatic",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] |
feature-extraction
| 2024-11-06T00:58:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
julientrn/Julien
|
julientrn
| 2024-11-06T00:56:44Z | 5 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-06T00:55:33Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Ju21eN
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Julien
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `Ju21eN` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
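Outside those UIs, a minimal `diffusers` sketch for a quick local test. This is hedged: it assumes the LoRA file in this repo resolves through `load_lora_weights` with its default adapter name, and that a CUDA GPU with enough memory is available.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("julientrn/Julien")  # assumes the default adapter file name resolves

# Use the trigger word Ju21eN in the prompt
image = pipe(
    "portrait photo of Ju21eN, natural light",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("julien.png")
```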
|
zaddyzaddy/gemma-final-boss
|
zaddyzaddy
| 2024-11-06T00:55:48Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-06T00:53:08Z |
---
base_model: unsloth/gemma-2-9b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** zaddyzaddy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Franken-MoE-18B-v0.1-i1-GGUF
|
mradermacher
| 2024-11-06T00:53:08Z | 84 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"moe",
"en",
"base_model:MaziyarPanahi/Franken-MoE-18B-v0.1",
"base_model:quantized:MaziyarPanahi/Franken-MoE-18B-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-05T20:27:22Z |
---
base_model: MaziyarPanahi/Franken-MoE-18B-v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/MaziyarPanahi/Franken-MoE-18B-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
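As a concrete starting point, here is a hedged sketch using `huggingface_hub` and the `llama-cpp-python` bindings; the file name is taken from the Q4_K_M row of the table below, while the context size and prompt are illustrative choices:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant file from this repo
path = hf_hub_download(
    repo_id="mradermacher/Franken-MoE-18B-v0.1-i1-GGUF",
    filename="Franken-MoE-18B-v0.1.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: What is a Mixture of Experts model?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```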
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 4.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 4.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 6.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 8.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 9.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 9.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 10.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 10.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 11.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 13.2 | |
| [GGUF](https://huggingface.co/mradermacher/Franken-MoE-18B-v0.1-i1-GGUF/resolve/main/Franken-MoE-18B-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 15.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Xu-Ouyang/pythia-12b-deduped-int8-step2-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-06T00:52:05Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-06T00:38:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bmysec/task-13-Qwen-Qwen1.5-1.8B
|
bmysec
| 2024-11-06T00:49:21Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | 2024-10-14T14:18:00Z |
---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF
|
mradermacher
| 2024-11-06T00:18:13Z | 50 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:RLHFlow/Llama3-v2-iterative-DPO-iter3",
"base_model:quantized:RLHFlow/Llama3-v2-iterative-DPO-iter3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-05T22:24:35Z |
---
base_model: RLHFlow/Llama3-v2-iterative-DPO-iter3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/RLHFlow/Llama3-v2-iterative-DPO-iter3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter3-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter3.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MayBashendy/ASAP_FineTuningBERT_Aug_k10_task1_organization_fold0
|
MayBashendy
| 2024-11-06T00:08:53Z | 166 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T23:20:42Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_Aug_k10_task1_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_Aug_k10_task1_organization_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4549
- Qwk: 0.5672
- Mse: 0.4549
- Rmse: 0.6745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
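As a hedged reconstruction, the listed values map onto `transformers.TrainingArguments` roughly as follows; the output directory is an assumption, and everything not listed stays at its default (Adam with the stated betas/epsilon is the Trainer's default optimizer):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ASAP_FineTuningBERT_Aug_k10_task1_organization_fold0",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```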
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0114 | 2 | 9.7347 | 0.0 | 9.7347 | 3.1200 |
| No log | 0.0227 | 4 | 8.1742 | 0.0130 | 8.1742 | 2.8591 |
| No log | 0.0341 | 6 | 7.1950 | 0.0054 | 7.1950 | 2.6823 |
| No log | 0.0455 | 8 | 6.4415 | 0.0018 | 6.4415 | 2.5380 |
| No log | 0.0568 | 10 | 5.6375 | 0.0 | 5.6375 | 2.3744 |
| No log | 0.0682 | 12 | 4.7911 | 0.0 | 4.7911 | 2.1889 |
| No log | 0.0795 | 14 | 3.9799 | 0.0515 | 3.9799 | 1.9950 |
| No log | 0.0909 | 16 | 3.2144 | 0.0139 | 3.2144 | 1.7929 |
| No log | 0.1023 | 18 | 2.5316 | 0.0115 | 2.5316 | 1.5911 |
| No log | 0.1136 | 20 | 1.9689 | 0.0077 | 1.9689 | 1.4032 |
| No log | 0.125 | 22 | 1.5082 | 0.0048 | 1.5082 | 1.2281 |
| No log | 0.1364 | 24 | 1.1894 | 0.0882 | 1.1894 | 1.0906 |
| No log | 0.1477 | 26 | 0.9602 | 0.0484 | 0.9602 | 0.9799 |
| No log | 0.1591 | 28 | 0.8288 | 0.0316 | 0.8288 | 0.9104 |
| No log | 0.1705 | 30 | 0.7857 | 0.0316 | 0.7857 | 0.8864 |
| No log | 0.1818 | 32 | 0.8212 | 0.0212 | 0.8212 | 0.9062 |
| No log | 0.1932 | 34 | 0.7208 | 0.0316 | 0.7208 | 0.8490 |
| No log | 0.2045 | 36 | 0.8403 | 0.0316 | 0.8403 | 0.9167 |
| No log | 0.2159 | 38 | 0.7159 | 0.0316 | 0.7159 | 0.8461 |
| No log | 0.2273 | 40 | 0.7460 | 0.0380 | 0.7460 | 0.8637 |
| No log | 0.2386 | 42 | 0.8015 | 0.0316 | 0.8015 | 0.8953 |
| No log | 0.25 | 44 | 0.7814 | 0.0316 | 0.7814 | 0.8840 |
| No log | 0.2614 | 46 | 0.7525 | 0.0316 | 0.7525 | 0.8675 |
| No log | 0.2727 | 48 | 0.7453 | 0.0316 | 0.7453 | 0.8633 |
| No log | 0.2841 | 50 | 0.7639 | 0.0316 | 0.7639 | 0.8740 |
| No log | 0.2955 | 52 | 0.8598 | 0.0848 | 0.8598 | 0.9272 |
| No log | 0.3068 | 54 | 0.8665 | 0.1037 | 0.8665 | 0.9309 |
| No log | 0.3182 | 56 | 0.7383 | 0.0861 | 0.7383 | 0.8593 |
| No log | 0.3295 | 58 | 0.7214 | 0.1187 | 0.7214 | 0.8493 |
| No log | 0.3409 | 60 | 0.7287 | 0.1580 | 0.7287 | 0.8536 |
| No log | 0.3523 | 62 | 0.6670 | 0.1663 | 0.6670 | 0.8167 |
| No log | 0.3636 | 64 | 0.7149 | 0.1814 | 0.7149 | 0.8455 |
| No log | 0.375 | 66 | 0.9481 | 0.2045 | 0.9481 | 0.9737 |
| No log | 0.3864 | 68 | 0.7115 | 0.1876 | 0.7115 | 0.8435 |
| No log | 0.3977 | 70 | 0.6716 | 0.1738 | 0.6716 | 0.8195 |
| No log | 0.4091 | 72 | 0.7815 | 0.1975 | 0.7815 | 0.8840 |
| No log | 0.4205 | 74 | 0.8699 | 0.2061 | 0.8699 | 0.9327 |
| No log | 0.4318 | 76 | 0.6762 | 0.1641 | 0.6762 | 0.8223 |
| No log | 0.4432 | 78 | 0.6379 | 0.0583 | 0.6379 | 0.7987 |
| No log | 0.4545 | 80 | 0.6996 | 0.0583 | 0.6996 | 0.8364 |
| No log | 0.4659 | 82 | 0.7439 | 0.0583 | 0.7439 | 0.8625 |
| No log | 0.4773 | 84 | 0.7845 | 0.0583 | 0.7845 | 0.8857 |
| No log | 0.4886 | 86 | 0.7396 | 0.0583 | 0.7396 | 0.8600 |
| No log | 0.5 | 88 | 0.6542 | 0.1172 | 0.6542 | 0.8088 |
| No log | 0.5114 | 90 | 0.6157 | 0.1357 | 0.6157 | 0.7846 |
| No log | 0.5227 | 92 | 0.6159 | 0.1589 | 0.6159 | 0.7848 |
| No log | 0.5341 | 94 | 0.5189 | 0.3523 | 0.5189 | 0.7203 |
| No log | 0.5455 | 96 | 0.5191 | 0.4688 | 0.5191 | 0.7205 |
| No log | 0.5568 | 98 | 0.5207 | 0.3048 | 0.5207 | 0.7216 |
| No log | 0.5682 | 100 | 0.6863 | 0.1892 | 0.6863 | 0.8284 |
| No log | 0.5795 | 102 | 0.6486 | 0.1932 | 0.6486 | 0.8053 |
| No log | 0.5909 | 104 | 0.5452 | 0.1623 | 0.5452 | 0.7384 |
| No log | 0.6023 | 106 | 0.5699 | 0.2629 | 0.5699 | 0.7549 |
| No log | 0.6136 | 108 | 0.6109 | 0.1026 | 0.6109 | 0.7816 |
| No log | 0.625 | 110 | 0.6248 | 0.1159 | 0.6248 | 0.7904 |
| No log | 0.6364 | 112 | 0.6414 | 0.1238 | 0.6414 | 0.8009 |
| No log | 0.6477 | 114 | 0.6175 | 0.1302 | 0.6175 | 0.7858 |
| No log | 0.6591 | 116 | 0.5685 | 0.1252 | 0.5685 | 0.7540 |
| No log | 0.6705 | 118 | 0.5241 | 0.4200 | 0.5241 | 0.7239 |
| No log | 0.6818 | 120 | 0.4807 | 0.4328 | 0.4807 | 0.6934 |
| No log | 0.6932 | 122 | 0.4715 | 0.4761 | 0.4715 | 0.6866 |
| No log | 0.7045 | 124 | 0.5655 | 0.2787 | 0.5655 | 0.7520 |
| No log | 0.7159 | 126 | 0.5116 | 0.3408 | 0.5116 | 0.7153 |
| No log | 0.7273 | 128 | 0.5443 | 0.4952 | 0.5443 | 0.7378 |
| No log | 0.7386 | 130 | 0.5683 | 0.4352 | 0.5683 | 0.7539 |
| No log | 0.75 | 132 | 0.5780 | 0.2798 | 0.5780 | 0.7603 |
| No log | 0.7614 | 134 | 0.6655 | 0.2147 | 0.6655 | 0.8158 |
| No log | 0.7727 | 136 | 0.6038 | 0.2141 | 0.6038 | 0.7770 |
| No log | 0.7841 | 138 | 0.6781 | 0.1793 | 0.6781 | 0.8235 |
| No log | 0.7955 | 140 | 0.6990 | 0.1414 | 0.6990 | 0.8361 |
| No log | 0.8068 | 142 | 0.5826 | 0.3549 | 0.5826 | 0.7633 |
| No log | 0.8182 | 144 | 0.5695 | 0.2389 | 0.5695 | 0.7546 |
| No log | 0.8295 | 146 | 0.5437 | 0.2942 | 0.5437 | 0.7374 |
| No log | 0.8409 | 148 | 0.5217 | 0.3495 | 0.5217 | 0.7223 |
| No log | 0.8523 | 150 | 0.4840 | 0.4863 | 0.4840 | 0.6957 |
| No log | 0.8636 | 152 | 0.4675 | 0.4998 | 0.4675 | 0.6838 |
| No log | 0.875 | 154 | 0.4645 | 0.4539 | 0.4645 | 0.6815 |
| No log | 0.8864 | 156 | 0.4542 | 0.4634 | 0.4542 | 0.6740 |
| No log | 0.8977 | 158 | 0.4414 | 0.5037 | 0.4414 | 0.6644 |
| No log | 0.9091 | 160 | 0.4568 | 0.5425 | 0.4568 | 0.6759 |
| No log | 0.9205 | 162 | 0.5597 | 0.5196 | 0.5597 | 0.7481 |
| No log | 0.9318 | 164 | 0.5233 | 0.5072 | 0.5233 | 0.7234 |
| No log | 0.9432 | 166 | 0.5666 | 0.4907 | 0.5666 | 0.7528 |
| No log | 0.9545 | 168 | 0.5935 | 0.4999 | 0.5935 | 0.7704 |
| No log | 0.9659 | 170 | 0.4759 | 0.4824 | 0.4759 | 0.6898 |
| No log | 0.9773 | 172 | 0.4808 | 0.4194 | 0.4808 | 0.6934 |
| No log | 0.9886 | 174 | 0.4415 | 0.4875 | 0.4415 | 0.6644 |
| No log | 1.0 | 176 | 0.4548 | 0.5475 | 0.4548 | 0.6744 |
| No log | 1.0114 | 178 | 0.4201 | 0.5116 | 0.4201 | 0.6481 |
| No log | 1.0227 | 180 | 0.4823 | 0.4489 | 0.4823 | 0.6945 |
| No log | 1.0341 | 182 | 0.4445 | 0.5460 | 0.4445 | 0.6667 |
| No log | 1.0455 | 184 | 0.4641 | 0.5192 | 0.4641 | 0.6812 |
| No log | 1.0568 | 186 | 0.4289 | 0.5231 | 0.4289 | 0.6549 |
| No log | 1.0682 | 188 | 0.4169 | 0.5301 | 0.4169 | 0.6457 |
| No log | 1.0795 | 190 | 0.4493 | 0.5494 | 0.4493 | 0.6703 |
| No log | 1.0909 | 192 | 0.5693 | 0.5521 | 0.5693 | 0.7545 |
| No log | 1.1023 | 194 | 0.4874 | 0.5605 | 0.4874 | 0.6982 |
| No log | 1.1136 | 196 | 0.4469 | 0.4880 | 0.4469 | 0.6685 |
| No log | 1.125 | 198 | 0.4601 | 0.5066 | 0.4601 | 0.6783 |
| No log | 1.1364 | 200 | 0.5392 | 0.5428 | 0.5392 | 0.7343 |
| No log | 1.1477 | 202 | 0.5681 | 0.5408 | 0.5681 | 0.7537 |
| No log | 1.1591 | 204 | 0.4747 | 0.5852 | 0.4747 | 0.6890 |
| No log | 1.1705 | 206 | 0.4286 | 0.5744 | 0.4286 | 0.6547 |
| No log | 1.1818 | 208 | 0.4256 | 0.5230 | 0.4256 | 0.6523 |
| No log | 1.1932 | 210 | 0.4242 | 0.5249 | 0.4242 | 0.6513 |
| No log | 1.2045 | 212 | 0.4136 | 0.5832 | 0.4136 | 0.6431 |
| No log | 1.2159 | 214 | 0.4189 | 0.5721 | 0.4189 | 0.6472 |
| No log | 1.2273 | 216 | 0.4977 | 0.4883 | 0.4977 | 0.7055 |
| No log | 1.2386 | 218 | 0.5647 | 0.4944 | 0.5647 | 0.7515 |
| No log | 1.25 | 220 | 0.4820 | 0.5600 | 0.4820 | 0.6943 |
| No log | 1.2614 | 222 | 0.4695 | 0.5750 | 0.4695 | 0.6852 |
| No log | 1.2727 | 224 | 0.4458 | 0.5415 | 0.4458 | 0.6677 |
| No log | 1.2841 | 226 | 0.4392 | 0.5580 | 0.4392 | 0.6627 |
| No log | 1.2955 | 228 | 0.5465 | 0.5540 | 0.5465 | 0.7393 |
| No log | 1.3068 | 230 | 0.5870 | 0.5313 | 0.5870 | 0.7662 |
| No log | 1.3182 | 232 | 0.5541 | 0.5446 | 0.5541 | 0.7444 |
| No log | 1.3295 | 234 | 0.4443 | 0.5619 | 0.4443 | 0.6666 |
| No log | 1.3409 | 236 | 0.4468 | 0.4576 | 0.4468 | 0.6684 |
| No log | 1.3523 | 238 | 0.4310 | 0.5055 | 0.4310 | 0.6565 |
| No log | 1.3636 | 240 | 0.4880 | 0.5656 | 0.4880 | 0.6986 |
| No log | 1.375 | 242 | 0.5594 | 0.5509 | 0.5594 | 0.7479 |
| No log | 1.3864 | 244 | 0.4259 | 0.5688 | 0.4259 | 0.6526 |
| No log | 1.3977 | 246 | 0.4739 | 0.4527 | 0.4739 | 0.6884 |
| No log | 1.4091 | 248 | 0.4907 | 0.4567 | 0.4907 | 0.7005 |
| No log | 1.4205 | 250 | 0.4267 | 0.5583 | 0.4267 | 0.6533 |
| No log | 1.4318 | 252 | 0.5439 | 0.5640 | 0.5439 | 0.7375 |
| No log | 1.4432 | 254 | 0.5187 | 0.5705 | 0.5187 | 0.7202 |
| No log | 1.4545 | 256 | 0.4470 | 0.5253 | 0.4470 | 0.6686 |
| No log | 1.4659 | 258 | 0.4574 | 0.4941 | 0.4574 | 0.6763 |
| No log | 1.4773 | 260 | 0.5256 | 0.5535 | 0.5256 | 0.7250 |
| No log | 1.4886 | 262 | 0.5484 | 0.5462 | 0.5484 | 0.7405 |
| No log | 1.5 | 264 | 0.5044 | 0.5133 | 0.5044 | 0.7102 |
| No log | 1.5114 | 266 | 0.5064 | 0.5246 | 0.5064 | 0.7116 |
| No log | 1.5227 | 268 | 0.5050 | 0.5502 | 0.5050 | 0.7106 |
| No log | 1.5341 | 270 | 0.4665 | 0.5550 | 0.4665 | 0.6830 |
| No log | 1.5455 | 272 | 0.4374 | 0.4714 | 0.4374 | 0.6613 |
| No log | 1.5568 | 274 | 0.4570 | 0.4437 | 0.4570 | 0.6760 |
| No log | 1.5682 | 276 | 0.4464 | 0.4714 | 0.4464 | 0.6681 |
| No log | 1.5795 | 278 | 0.4835 | 0.5696 | 0.4835 | 0.6954 |
| No log | 1.5909 | 280 | 0.5603 | 0.5725 | 0.5603 | 0.7485 |
| No log | 1.6023 | 282 | 0.4477 | 0.5621 | 0.4477 | 0.6691 |
| No log | 1.6136 | 284 | 0.4323 | 0.5453 | 0.4323 | 0.6575 |
| No log | 1.625 | 286 | 0.4296 | 0.4935 | 0.4296 | 0.6555 |
| No log | 1.6364 | 288 | 0.4263 | 0.5562 | 0.4263 | 0.6529 |
| No log | 1.6477 | 290 | 0.4743 | 0.5710 | 0.4743 | 0.6887 |
| No log | 1.6591 | 292 | 0.6933 | 0.5285 | 0.6933 | 0.8326 |
| No log | 1.6705 | 294 | 0.8348 | 0.4410 | 0.8348 | 0.9137 |
| No log | 1.6818 | 296 | 0.6704 | 0.4751 | 0.6704 | 0.8188 |
| No log | 1.6932 | 298 | 0.4691 | 0.5566 | 0.4691 | 0.6849 |
| No log | 1.7045 | 300 | 0.4348 | 0.5105 | 0.4348 | 0.6594 |
| No log | 1.7159 | 302 | 0.4306 | 0.4943 | 0.4306 | 0.6562 |
| No log | 1.7273 | 304 | 0.4527 | 0.5525 | 0.4527 | 0.6728 |
| No log | 1.7386 | 306 | 0.5002 | 0.5397 | 0.5002 | 0.7073 |
| No log | 1.75 | 308 | 0.4748 | 0.5560 | 0.4748 | 0.6890 |
| No log | 1.7614 | 310 | 0.4323 | 0.5137 | 0.4323 | 0.6575 |
| No log | 1.7727 | 312 | 0.4459 | 0.4801 | 0.4459 | 0.6678 |
| No log | 1.7841 | 314 | 0.5073 | 0.4369 | 0.5073 | 0.7123 |
| No log | 1.7955 | 316 | 0.4308 | 0.4933 | 0.4308 | 0.6563 |
| No log | 1.8068 | 318 | 0.4643 | 0.5502 | 0.4643 | 0.6814 |
| No log | 1.8182 | 320 | 0.4536 | 0.5647 | 0.4536 | 0.6735 |
| No log | 1.8295 | 322 | 0.4230 | 0.5384 | 0.4230 | 0.6504 |
| No log | 1.8409 | 324 | 0.4851 | 0.5827 | 0.4851 | 0.6965 |
| No log | 1.8523 | 326 | 0.5954 | 0.5775 | 0.5954 | 0.7716 |
| No log | 1.8636 | 328 | 0.4889 | 0.5917 | 0.4889 | 0.6992 |
| No log | 1.875 | 330 | 0.4052 | 0.5337 | 0.4052 | 0.6366 |
| No log | 1.8864 | 332 | 0.4054 | 0.5265 | 0.4054 | 0.6367 |
| No log | 1.8977 | 334 | 0.4181 | 0.5840 | 0.4181 | 0.6466 |
| No log | 1.9091 | 336 | 0.4680 | 0.6023 | 0.4680 | 0.6841 |
| No log | 1.9205 | 338 | 0.4088 | 0.5707 | 0.4088 | 0.6394 |
| No log | 1.9318 | 340 | 0.4054 | 0.5181 | 0.4054 | 0.6367 |
| No log | 1.9432 | 342 | 0.4072 | 0.5100 | 0.4072 | 0.6382 |
| No log | 1.9545 | 344 | 0.4149 | 0.5448 | 0.4149 | 0.6441 |
| No log | 1.9659 | 346 | 0.4438 | 0.5580 | 0.4438 | 0.6662 |
| No log | 1.9773 | 348 | 0.4241 | 0.5413 | 0.4241 | 0.6513 |
| No log | 1.9886 | 350 | 0.4129 | 0.5249 | 0.4129 | 0.6426 |
| No log | 2.0 | 352 | 0.4285 | 0.5538 | 0.4285 | 0.6546 |
| No log | 2.0114 | 354 | 0.5147 | 0.6375 | 0.5147 | 0.7174 |
| No log | 2.0227 | 356 | 0.4708 | 0.6128 | 0.4708 | 0.6862 |
| No log | 2.0341 | 358 | 0.4233 | 0.5582 | 0.4233 | 0.6506 |
| No log | 2.0455 | 360 | 0.4189 | 0.5425 | 0.4189 | 0.6472 |
| No log | 2.0568 | 362 | 0.4137 | 0.5602 | 0.4137 | 0.6432 |
| No log | 2.0682 | 364 | 0.4642 | 0.5999 | 0.4642 | 0.6813 |
| No log | 2.0795 | 366 | 0.5295 | 0.6367 | 0.5295 | 0.7277 |
| No log | 2.0909 | 368 | 0.4291 | 0.5931 | 0.4291 | 0.6550 |
| No log | 2.1023 | 370 | 0.4317 | 0.5376 | 0.4317 | 0.6571 |
| No log | 2.1136 | 372 | 0.4207 | 0.5464 | 0.4207 | 0.6486 |
| No log | 2.125 | 374 | 0.4535 | 0.5970 | 0.4535 | 0.6734 |
| No log | 2.1364 | 376 | 0.5130 | 0.6322 | 0.5130 | 0.7163 |
| No log | 2.1477 | 378 | 0.4364 | 0.5838 | 0.4364 | 0.6606 |
| No log | 2.1591 | 380 | 0.4412 | 0.5388 | 0.4412 | 0.6642 |
| No log | 2.1705 | 382 | 0.4346 | 0.5620 | 0.4346 | 0.6593 |
| No log | 2.1818 | 384 | 0.4287 | 0.5860 | 0.4287 | 0.6548 |
| No log | 2.1932 | 386 | 0.4245 | 0.5597 | 0.4245 | 0.6516 |
| No log | 2.2045 | 388 | 0.4245 | 0.5474 | 0.4245 | 0.6515 |
| No log | 2.2159 | 390 | 0.4223 | 0.5623 | 0.4223 | 0.6498 |
| No log | 2.2273 | 392 | 0.4731 | 0.6225 | 0.4731 | 0.6878 |
| No log | 2.2386 | 394 | 0.4599 | 0.6138 | 0.4599 | 0.6782 |
| No log | 2.25 | 396 | 0.4228 | 0.5635 | 0.4228 | 0.6503 |
| No log | 2.2614 | 398 | 0.4340 | 0.5315 | 0.4340 | 0.6588 |
| No log | 2.2727 | 400 | 0.4267 | 0.5617 | 0.4267 | 0.6532 |
| No log | 2.2841 | 402 | 0.5003 | 0.6471 | 0.5003 | 0.7073 |
| No log | 2.2955 | 404 | 0.4897 | 0.6522 | 0.4897 | 0.6998 |
| No log | 2.3068 | 406 | 0.4157 | 0.5735 | 0.4157 | 0.6448 |
| No log | 2.3182 | 408 | 0.4107 | 0.5737 | 0.4107 | 0.6409 |
| No log | 2.3295 | 410 | 0.4436 | 0.5886 | 0.4436 | 0.6660 |
| No log | 2.3409 | 412 | 0.5416 | 0.6547 | 0.5416 | 0.7359 |
| No log | 2.3523 | 414 | 0.4725 | 0.6328 | 0.4725 | 0.6874 |
| No log | 2.3636 | 416 | 0.4055 | 0.5980 | 0.4055 | 0.6368 |
| No log | 2.375 | 418 | 0.3982 | 0.5987 | 0.3982 | 0.6310 |
| No log | 2.3864 | 420 | 0.4090 | 0.6295 | 0.4090 | 0.6395 |
| No log | 2.3977 | 422 | 0.4356 | 0.6545 | 0.4356 | 0.6600 |
| No log | 2.4091 | 424 | 0.4198 | 0.5781 | 0.4198 | 0.6479 |
| No log | 2.4205 | 426 | 0.4198 | 0.5795 | 0.4198 | 0.6479 |
| No log | 2.4318 | 428 | 0.4591 | 0.6679 | 0.4591 | 0.6776 |
| No log | 2.4432 | 430 | 0.5155 | 0.6868 | 0.5155 | 0.7180 |
| No log | 2.4545 | 432 | 0.4627 | 0.6576 | 0.4627 | 0.6802 |
| No log | 2.4659 | 434 | 0.4127 | 0.6246 | 0.4127 | 0.6424 |
| No log | 2.4773 | 436 | 0.4452 | 0.6598 | 0.4452 | 0.6672 |
| No log | 2.4886 | 438 | 0.4397 | 0.6444 | 0.4397 | 0.6631 |
| No log | 2.5 | 440 | 0.4040 | 0.5826 | 0.4040 | 0.6356 |
| No log | 2.5114 | 442 | 0.4222 | 0.4983 | 0.4222 | 0.6498 |
| No log | 2.5227 | 444 | 0.4090 | 0.5726 | 0.4090 | 0.6395 |
| No log | 2.5341 | 446 | 0.5425 | 0.6562 | 0.5425 | 0.7365 |
| No log | 2.5455 | 448 | 0.5095 | 0.6544 | 0.5095 | 0.7138 |
| No log | 2.5568 | 450 | 0.4178 | 0.5298 | 0.4178 | 0.6464 |
| No log | 2.5682 | 452 | 0.5279 | 0.4212 | 0.5279 | 0.7266 |
| No log | 2.5795 | 454 | 0.4894 | 0.4311 | 0.4894 | 0.6996 |
| No log | 2.5909 | 456 | 0.4151 | 0.5623 | 0.4151 | 0.6443 |
| No log | 2.6023 | 458 | 0.4841 | 0.6187 | 0.4841 | 0.6957 |
| No log | 2.6136 | 460 | 0.4839 | 0.6172 | 0.4839 | 0.6956 |
| No log | 2.625 | 462 | 0.4316 | 0.5801 | 0.4316 | 0.6570 |
| No log | 2.6364 | 464 | 0.4264 | 0.5343 | 0.4264 | 0.6530 |
| No log | 2.6477 | 466 | 0.4482 | 0.5889 | 0.4482 | 0.6695 |
| No log | 2.6591 | 468 | 0.5486 | 0.6571 | 0.5486 | 0.7407 |
| No log | 2.6705 | 470 | 0.6269 | 0.6576 | 0.6269 | 0.7918 |
| No log | 2.6818 | 472 | 0.4849 | 0.6532 | 0.4849 | 0.6963 |
| No log | 2.6932 | 474 | 0.4197 | 0.5313 | 0.4197 | 0.6478 |
| No log | 2.7045 | 476 | 0.4206 | 0.5233 | 0.4206 | 0.6486 |
| No log | 2.7159 | 478 | 0.4207 | 0.6064 | 0.4207 | 0.6486 |
| No log | 2.7273 | 480 | 0.5097 | 0.6508 | 0.5097 | 0.7139 |
| No log | 2.7386 | 482 | 0.5067 | 0.6366 | 0.5067 | 0.7118 |
| No log | 2.75 | 484 | 0.4216 | 0.5987 | 0.4216 | 0.6493 |
| No log | 2.7614 | 486 | 0.4219 | 0.5224 | 0.4219 | 0.6495 |
| No log | 2.7727 | 488 | 0.4110 | 0.5702 | 0.4110 | 0.6411 |
| No log | 2.7841 | 490 | 0.4821 | 0.6273 | 0.4821 | 0.6943 |
| No log | 2.7955 | 492 | 0.5594 | 0.6253 | 0.5594 | 0.7479 |
| No log | 2.8068 | 494 | 0.4782 | 0.6201 | 0.4782 | 0.6915 |
| No log | 2.8182 | 496 | 0.4121 | 0.5825 | 0.4121 | 0.6419 |
| No log | 2.8295 | 498 | 0.4114 | 0.5867 | 0.4114 | 0.6414 |
| 0.5515 | 2.8409 | 500 | 0.4420 | 0.6076 | 0.4420 | 0.6649 |
| 0.5515 | 2.8523 | 502 | 0.5083 | 0.6373 | 0.5083 | 0.7130 |
| 0.5515 | 2.8636 | 504 | 0.4545 | 0.6369 | 0.4545 | 0.6741 |
| 0.5515 | 2.875 | 506 | 0.4052 | 0.5839 | 0.4052 | 0.6366 |
| 0.5515 | 2.8864 | 508 | 0.4128 | 0.5968 | 0.4128 | 0.6425 |
| 0.5515 | 2.8977 | 510 | 0.4220 | 0.5989 | 0.4220 | 0.6496 |
| 0.5515 | 2.9091 | 512 | 0.4134 | 0.5743 | 0.4134 | 0.6429 |
| 0.5515 | 2.9205 | 514 | 0.4391 | 0.6246 | 0.4391 | 0.6626 |
| 0.5515 | 2.9318 | 516 | 0.4337 | 0.5948 | 0.4337 | 0.6586 |
| 0.5515 | 2.9432 | 518 | 0.4202 | 0.5750 | 0.4202 | 0.6483 |
| 0.5515 | 2.9545 | 520 | 0.4245 | 0.5818 | 0.4245 | 0.6515 |
| 0.5515 | 2.9659 | 522 | 0.5347 | 0.6177 | 0.5347 | 0.7312 |
| 0.5515 | 2.9773 | 524 | 0.6007 | 0.6429 | 0.6007 | 0.7750 |
| 0.5515 | 2.9886 | 526 | 0.5066 | 0.6241 | 0.5066 | 0.7118 |
| 0.5515 | 3.0 | 528 | 0.4101 | 0.5718 | 0.4101 | 0.6404 |
| 0.5515 | 3.0114 | 530 | 0.4404 | 0.4904 | 0.4404 | 0.6636 |
| 0.5515 | 3.0227 | 532 | 0.4371 | 0.5097 | 0.4371 | 0.6612 |
| 0.5515 | 3.0341 | 534 | 0.4178 | 0.6007 | 0.4178 | 0.6464 |
| 0.5515 | 3.0455 | 536 | 0.4557 | 0.6513 | 0.4557 | 0.6751 |
| 0.5515 | 3.0568 | 538 | 0.4464 | 0.6436 | 0.4464 | 0.6682 |
| 0.5515 | 3.0682 | 540 | 0.4390 | 0.6308 | 0.4390 | 0.6625 |
| 0.5515 | 3.0795 | 542 | 0.4745 | 0.6517 | 0.4745 | 0.6888 |
| 0.5515 | 3.0909 | 544 | 0.5399 | 0.6864 | 0.5399 | 0.7348 |
| 0.5515 | 3.1023 | 546 | 0.5331 | 0.6786 | 0.5331 | 0.7301 |
| 0.5515 | 3.1136 | 548 | 0.4643 | 0.6357 | 0.4643 | 0.6814 |
| 0.5515 | 3.125 | 550 | 0.4586 | 0.6252 | 0.4586 | 0.6772 |
| 0.5515 | 3.1364 | 552 | 0.5157 | 0.6714 | 0.5157 | 0.7181 |
| 0.5515 | 3.1477 | 554 | 0.6444 | 0.6655 | 0.6444 | 0.8027 |
| 0.5515 | 3.1591 | 556 | 0.5704 | 0.6742 | 0.5704 | 0.7553 |
| 0.5515 | 3.1705 | 558 | 0.4357 | 0.6166 | 0.4357 | 0.6601 |
| 0.5515 | 3.1818 | 560 | 0.4685 | 0.4991 | 0.4685 | 0.6845 |
| 0.5515 | 3.1932 | 562 | 0.4718 | 0.4991 | 0.4718 | 0.6869 |
| 0.5515 | 3.2045 | 564 | 0.4272 | 0.6083 | 0.4272 | 0.6536 |
| 0.5515 | 3.2159 | 566 | 0.5600 | 0.6719 | 0.5600 | 0.7484 |
| 0.5515 | 3.2273 | 568 | 0.6114 | 0.6736 | 0.6114 | 0.7819 |
| 0.5515 | 3.2386 | 570 | 0.4850 | 0.6383 | 0.4850 | 0.6964 |
| 0.5515 | 3.25 | 572 | 0.4213 | 0.6068 | 0.4213 | 0.6491 |
| 0.5515 | 3.2614 | 574 | 0.4294 | 0.6174 | 0.4294 | 0.6553 |
| 0.5515 | 3.2727 | 576 | 0.5292 | 0.6644 | 0.5292 | 0.7275 |
| 0.5515 | 3.2841 | 578 | 0.5770 | 0.6604 | 0.5770 | 0.7596 |
| 0.5515 | 3.2955 | 580 | 0.5177 | 0.6372 | 0.5177 | 0.7195 |
| 0.5515 | 3.3068 | 582 | 0.4914 | 0.6461 | 0.4914 | 0.7010 |
| 0.5515 | 3.3182 | 584 | 0.4538 | 0.6445 | 0.4538 | 0.6737 |
| 0.5515 | 3.3295 | 586 | 0.4978 | 0.6779 | 0.4978 | 0.7055 |
| 0.5515 | 3.3409 | 588 | 0.5327 | 0.6939 | 0.5327 | 0.7299 |
| 0.5515 | 3.3523 | 590 | 0.4697 | 0.6787 | 0.4697 | 0.6853 |
| 0.5515 | 3.3636 | 592 | 0.4858 | 0.6824 | 0.4858 | 0.6970 |
| 0.5515 | 3.375 | 594 | 0.6520 | 0.6913 | 0.6520 | 0.8075 |
| 0.5515 | 3.3864 | 596 | 0.6282 | 0.6994 | 0.6282 | 0.7926 |
| 0.5515 | 3.3977 | 598 | 0.4539 | 0.6540 | 0.4539 | 0.6738 |
| 0.5515 | 3.4091 | 600 | 0.4139 | 0.5978 | 0.4139 | 0.6434 |
| 0.5515 | 3.4205 | 602 | 0.4109 | 0.5851 | 0.4109 | 0.6410 |
| 0.5515 | 3.4318 | 604 | 0.4338 | 0.6390 | 0.4338 | 0.6586 |
| 0.5515 | 3.4432 | 606 | 0.4379 | 0.6491 | 0.4379 | 0.6617 |
| 0.5515 | 3.4545 | 608 | 0.4174 | 0.5518 | 0.4174 | 0.6460 |
| 0.5515 | 3.4659 | 610 | 0.4213 | 0.5269 | 0.4213 | 0.6491 |
| 0.5515 | 3.4773 | 612 | 0.4175 | 0.5813 | 0.4175 | 0.6462 |
| 0.5515 | 3.4886 | 614 | 0.4648 | 0.6417 | 0.4648 | 0.6817 |
| 0.5515 | 3.5 | 616 | 0.4579 | 0.6286 | 0.4579 | 0.6767 |
| 0.5515 | 3.5114 | 618 | 0.4182 | 0.5457 | 0.4182 | 0.6467 |
| 0.5515 | 3.5227 | 620 | 0.4555 | 0.4898 | 0.4555 | 0.6749 |
| 0.5515 | 3.5341 | 622 | 0.4311 | 0.4918 | 0.4311 | 0.6566 |
| 0.5515 | 3.5455 | 624 | 0.4305 | 0.5557 | 0.4305 | 0.6561 |
| 0.5515 | 3.5568 | 626 | 0.5087 | 0.6284 | 0.5087 | 0.7133 |
| 0.5515 | 3.5682 | 628 | 0.5067 | 0.6341 | 0.5067 | 0.7119 |
| 0.5515 | 3.5795 | 630 | 0.4607 | 0.6148 | 0.4607 | 0.6788 |
| 0.5515 | 3.5909 | 632 | 0.5238 | 0.6355 | 0.5238 | 0.7237 |
| 0.5515 | 3.6023 | 634 | 0.6741 | 0.6652 | 0.6741 | 0.8210 |
| 0.5515 | 3.6136 | 636 | 0.5839 | 0.6365 | 0.5839 | 0.7641 |
| 0.5515 | 3.625 | 638 | 0.4603 | 0.5717 | 0.4603 | 0.6785 |
| 0.5515 | 3.6364 | 640 | 0.4469 | 0.5518 | 0.4469 | 0.6685 |
| 0.5515 | 3.6477 | 642 | 0.4473 | 0.6080 | 0.4473 | 0.6688 |
| 0.5515 | 3.6591 | 644 | 0.5201 | 0.6328 | 0.5201 | 0.7212 |
| 0.5515 | 3.6705 | 646 | 0.4835 | 0.6277 | 0.4835 | 0.6953 |
| 0.5515 | 3.6818 | 648 | 0.4125 | 0.5875 | 0.4125 | 0.6422 |
| 0.5515 | 3.6932 | 650 | 0.4335 | 0.5037 | 0.4335 | 0.6584 |
| 0.5515 | 3.7045 | 652 | 0.4219 | 0.5080 | 0.4219 | 0.6495 |
| 0.5515 | 3.7159 | 654 | 0.4187 | 0.5952 | 0.4187 | 0.6470 |
| 0.5515 | 3.7273 | 656 | 0.5018 | 0.6299 | 0.5018 | 0.7084 |
| 0.5515 | 3.7386 | 658 | 0.5551 | 0.6382 | 0.5551 | 0.7450 |
| 0.5515 | 3.75 | 660 | 0.5193 | 0.6183 | 0.5193 | 0.7206 |
| 0.5515 | 3.7614 | 662 | 0.5127 | 0.6302 | 0.5127 | 0.7160 |
| 0.5515 | 3.7727 | 664 | 0.4472 | 0.6253 | 0.4472 | 0.6688 |
| 0.5515 | 3.7841 | 666 | 0.4344 | 0.6137 | 0.4344 | 0.6591 |
| 0.5515 | 3.7955 | 668 | 0.4824 | 0.6344 | 0.4824 | 0.6946 |
| 0.5515 | 3.8068 | 670 | 0.5332 | 0.6432 | 0.5332 | 0.7302 |
| 0.5515 | 3.8182 | 672 | 0.5040 | 0.6515 | 0.5040 | 0.7099 |
| 0.5515 | 3.8295 | 674 | 0.4215 | 0.6303 | 0.4215 | 0.6492 |
| 0.5515 | 3.8409 | 676 | 0.4190 | 0.6020 | 0.4190 | 0.6473 |
| 0.5515 | 3.8523 | 678 | 0.4520 | 0.6422 | 0.4520 | 0.6723 |
| 0.5515 | 3.8636 | 680 | 0.5780 | 0.6779 | 0.5780 | 0.7603 |
| 0.5515 | 3.875 | 682 | 0.5425 | 0.6861 | 0.5425 | 0.7365 |
| 0.5515 | 3.8864 | 684 | 0.4440 | 0.6037 | 0.4440 | 0.6664 |
| 0.5515 | 3.8977 | 686 | 0.4485 | 0.5657 | 0.4485 | 0.6697 |
| 0.5515 | 3.9091 | 688 | 0.4628 | 0.6286 | 0.4628 | 0.6803 |
| 0.5515 | 3.9205 | 690 | 0.5195 | 0.6775 | 0.5195 | 0.7208 |
| 0.5515 | 3.9318 | 692 | 0.5554 | 0.6780 | 0.5554 | 0.7453 |
| 0.5515 | 3.9432 | 694 | 0.5013 | 0.6453 | 0.5013 | 0.7080 |
| 0.5515 | 3.9545 | 696 | 0.4499 | 0.6046 | 0.4499 | 0.6708 |
| 0.5515 | 3.9659 | 698 | 0.4549 | 0.5181 | 0.4549 | 0.6745 |
| 0.5515 | 3.9773 | 700 | 0.4425 | 0.5636 | 0.4425 | 0.6652 |
| 0.5515 | 3.9886 | 702 | 0.4608 | 0.6359 | 0.4608 | 0.6788 |
| 0.5515 | 4.0 | 704 | 0.5322 | 0.6808 | 0.5322 | 0.7295 |
| 0.5515 | 4.0114 | 706 | 0.4938 | 0.6654 | 0.4938 | 0.7027 |
| 0.5515 | 4.0227 | 708 | 0.4800 | 0.6531 | 0.4800 | 0.6928 |
| 0.5515 | 4.0341 | 710 | 0.4941 | 0.6650 | 0.4941 | 0.7029 |
| 0.5515 | 4.0455 | 712 | 0.4401 | 0.6222 | 0.4401 | 0.6634 |
| 0.5515 | 4.0568 | 714 | 0.4169 | 0.5819 | 0.4169 | 0.6457 |
| 0.5515 | 4.0682 | 716 | 0.4217 | 0.6044 | 0.4217 | 0.6494 |
| 0.5515 | 4.0795 | 718 | 0.4778 | 0.6594 | 0.4778 | 0.6912 |
| 0.5515 | 4.0909 | 720 | 0.4675 | 0.6598 | 0.4675 | 0.6837 |
| 0.5515 | 4.1023 | 722 | 0.4751 | 0.6610 | 0.4751 | 0.6893 |
| 0.5515 | 4.1136 | 724 | 0.4295 | 0.6205 | 0.4295 | 0.6554 |
| 0.5515 | 4.125 | 726 | 0.4494 | 0.6326 | 0.4494 | 0.6704 |
| 0.5515 | 4.1364 | 728 | 0.5310 | 0.6796 | 0.5310 | 0.7287 |
| 0.5515 | 4.1477 | 730 | 0.6485 | 0.7084 | 0.6485 | 0.8053 |
| 0.5515 | 4.1591 | 732 | 0.5901 | 0.6992 | 0.5901 | 0.7682 |
| 0.5515 | 4.1705 | 734 | 0.4429 | 0.6091 | 0.4429 | 0.6655 |
| 0.5515 | 4.1818 | 736 | 0.4461 | 0.5303 | 0.4461 | 0.6679 |
| 0.5515 | 4.1932 | 738 | 0.4385 | 0.5652 | 0.4385 | 0.6622 |
| 0.5515 | 4.2045 | 740 | 0.5146 | 0.6285 | 0.5146 | 0.7174 |
| 0.5515 | 4.2159 | 742 | 0.7075 | 0.6704 | 0.7075 | 0.8411 |
| 0.5515 | 4.2273 | 744 | 0.6921 | 0.6525 | 0.6921 | 0.8319 |
| 0.5515 | 4.2386 | 746 | 0.5222 | 0.6098 | 0.5222 | 0.7226 |
| 0.5515 | 4.25 | 748 | 0.4937 | 0.5976 | 0.4937 | 0.7026 |
| 0.5515 | 4.2614 | 750 | 0.4898 | 0.6005 | 0.4898 | 0.6998 |
| 0.5515 | 4.2727 | 752 | 0.4578 | 0.5935 | 0.4578 | 0.6766 |
| 0.5515 | 4.2841 | 754 | 0.4448 | 0.5578 | 0.4448 | 0.6669 |
| 0.5515 | 4.2955 | 756 | 0.4549 | 0.5672 | 0.4549 | 0.6745 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
ooSGoo/TEST_SG_241105
|
ooSGoo
| 2024-11-06T00:08:50Z | 5 | 0 | null |
[
"safetensors",
"qwen2",
"unsloth",
"trl",
"sft",
"license:apache-2.0",
"region:us"
] | null | 2024-11-05T14:02:00Z |
---
license: apache-2.0
tags:
- unsloth
- trl
- sft
---
|
Lyte/Llama-3.1-8B-Instruct-Reasoner-1o1_v0.3
|
Lyte
| 2024-11-06T00:02:38Z | 413 | 7 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"Llama-3.1-8B",
"conversational",
"en",
"dataset:Lyte/Reasoner-1o1-v0.3-HQ",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-17T00:04:28Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
- Llama-3.1-8B
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
datasets:
- Lyte/Reasoner-1o1-v0.3-HQ
widget:
- example_title: HELP its a Llama
messages:
- role: user
content: There's a llama on my lawn, how can I get rid of him?
pipeline_tag: text-generation
model-index:
- name: Llama-3.1-8B-Instruct-Reasoner-1o1_v0.3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 70.98
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.1-8B-Instruct-Reasoner-1o1_v0.3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 27.84
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.1-8B-Instruct-Reasoner-1o1_v0.3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 14.8
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.1-8B-Instruct-Reasoner-1o1_v0.3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 2.68
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.1-8B-Instruct-Reasoner-1o1_v0.3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.9
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.1-8B-Instruct-Reasoner-1o1_v0.3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 29.09
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Lyte/Llama-3.1-8B-Instruct-Reasoner-1o1_v0.3
name: Open LLM Leaderboard
---
# Uploaded model
- **NOTE:** This model is just an *experiment* in getting the model to generate more tokens for reasoning, with verification and self-correction, before providing an answer. It is only a proof of concept: no model will show real performance improvements from such a tiny dataset (one that doesn't target any specific knowledge), and it may even degrade. The point was never to improve performance but to have the model learn to "reason", because reaching SOTA in benchmarks does not equal "reasoning".
- **Demo:** try Q4_K_M [here](https://huggingface.co/spaces/Lyte/Llama-3.1-8B-Instruct-Reasoner-1o1_v0.3-Q4_K_M)
- **Developed by:** Lyte
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
# Prompt
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a world-class AI system, capable of complex reasoning and reflection and correcting your mistakes. Reason through the query/question, and then provide your final response. If you detect that you made a mistake in your reasoning at any point, correct yourself.<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{response}
```
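A minimal sketch of producing this layout in Python. It assumes the tokenizer in this repo ships the standard Llama-3.1 chat template, in which case `apply_chat_template` renders the same special-token structure as the prompt above:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Lyte/Llama-3.1-8B-Instruct-Reasoner-1o1_v0.3")

# System prompt used during finetuning (copied from the template above)
system = (
    "You are a world-class AI system, capable of complex reasoning and reflection "
    "and correcting your mistakes. Reason through the query/question, and then "
    "provide your final response. If you detect that you made a mistake in your "
    "reasoning at any point, correct yourself."
)
messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": "How many 'r's are in strawberry?"},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```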
# Example(0-shot):
* The reason the correct word "strawberry" keeps being written again and again is simply a tokenizer issue. However, the model did count correctly towards the end, stating "(the correct count is 4 'r's: one in 'ar', three in 'err')". The "err" instead of "errr" is likewise a tokenization artifact.

# Benchmark Scores
* Note: Evals were run with and without the system prompt that was used in the finetuning.
| Task/Group | Metric | With Prompt | Without Prompt | Difference |
|---------------------------------------|------------|-------------|----------------|------------|
| arc_challenge | acc | 51.37% | 43.77% | +7.60% |
| | acc_norm | 53.67% | 46.42% | +7.25% |
| arc_easy | acc | 81.99% | 73.11% | +8.88% |
| | acc_norm | 79.42% | 64.98% | +14.44% |
| commonsense_qa | acc | 76.00% | 72.73% | +3.27% |
| gsm8k (flexible-extract) | exact_match| 74.91% | 76.57% | -1.66% |
| gsm8k (strict-match) | exact_match| 73.92% | 75.97% | -2.05% |
| hellaswag | acc | 59.01% | 58.87% | +0.14% |
| | acc_norm | 77.98% | 77.32% | +0.66% |
| mmlu (overall) | acc | 66.06% | 65.45% | +0.61% |
| mmlu - humanities | acc | 61.47% | 61.38% | +0.09% |
| mmlu - other | acc | 72.84% | 72.16% | +0.68% |
| mmlu - social sciences | acc | 75.14% | 73.94% | +1.20% |
| mmlu - stem | acc | 57.37% | 56.61% | +0.76% |
| piqa | acc | 79.49% | 78.45% | +1.04% |
| | acc_norm | 80.47% | 78.73% | +1.74% |
# Compared to the original Llama-3.1-8B-Instruct:
| Task/Benchmark | Metric | Llama-3.1-8B-Instruct | Finetuned Model | Difference |
|--------------------------|------------|----------------------:|----------------:|-----------:|
| MMLU | acc | 69.40% | 66.06% | -3.34% |
| ARC-Challenge | acc | 83.40% | 51.37% | -32.03% |
| CommonSenseQA | acc | 75.00%* | 76.00% | +1.00% |
| GSM-8K | exact_match| 84.50% | 74.91% | -9.59% |
* Note: For Llama-3.1-8B-Instruct, the CommonSenseQA score is from the base model, not the instruct version. The -32.03% drop on ARC-Challenge is very bad; I have no idea whether the finetuning caused it or whether it comes down to differences in eval setup, so take it as you will. I did not plan to benchmark anything, but people kept asking for benchmarks of an experimental model (can you even properly benchmark "generate more tokens to reason"? I probably needed to adjust the temperature to really make use of the model).
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Lyte__Llama-3.1-8B-Instruct-Reasoner-1o1_v0.3)
| Metric |Value|
|-------------------|----:|
|Avg. |25.05|
|IFEval (0-Shot) |70.98|
|BBH (3-Shot) |27.84|
|MATH Lvl 5 (4-Shot)|14.80|
|GPQA (0-shot) | 2.68|
|MuSR (0-shot) | 4.90|
|MMLU-PRO (5-shot) |29.09|
|
tencent-community/Hunyuan-A52B-Instruct-FP8
|
tencent-community
| 2024-11-05T23:35:19Z | 47 | 1 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan",
"text-generation",
"conversational",
"custom_code",
"en",
"arxiv:2411.02265",
"autotrain_compatible",
"fp8",
"region:us"
] |
text-generation
| 2024-11-05T13:33:28Z |
---
license_link: https://huggingface.co/tencent/Tencent-Hunyuan-Large/blob/main/LICENSE.txt
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
The original repo is here: https://huggingface.co/tencent/Tencent-Hunyuan-Large
This is the Hunyuan-A52B-Instruct-FP8 model uploaded into its own repository.
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
### Model Introduction
With the rapid development of artificial intelligence technology, large language models (LLMs) have made significant progress in fields such as natural language processing, computer vision, and scientific tasks. However, as the scale of these models increases, optimizing resource consumption while maintaining high performance has become a key challenge. To address this challenge, we have explored Mixture of Experts (MoE) models. The currently unveiled Hunyuan-Large (Hunyuan-MoE-A52B) model is the largest open-source Transformer-based MoE model in the industry, featuring a total of 389 billion parameters and 52 billion active parameters.
By open-sourcing the Hunyuan-Large model and revealing related technical details, we hope to inspire more researchers with innovative ideas and collectively advance the progress and application of AI technology. We welcome you to join our open-source community to explore and optimize future AI models together!
### Introduction to Model Technical Advantages
#### Model
- **High-Quality Synthetic Data**: By enhancing training with synthetic data, Hunyuan-Large can learn richer representations, handle long-context inputs, and generalize better to unseen data.
- **KV Cache Compression**: Utilizes Grouped Query Attention (GQA) and Cross-Layer Attention (CLA) strategies to significantly reduce memory usage and computational overhead of KV caches, improving inference throughput (a back-of-the-envelope sketch follows this list).
- **Expert-Specific Learning Rate Scaling**: Sets different learning rates for different experts to ensure each sub-model effectively learns from the data and contributes to overall performance.
- **Long-Context Processing Capability**: The pre-trained model supports text sequences up to 256K, and the Instruct model supports up to 128K, significantly enhancing the ability to handle long-context tasks.
- **Extensive Benchmarking**: Conducts extensive experiments across various languages and tasks to validate the practical effectiveness and safety of Hunyuan-Large.
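As a rough illustration of why GQA shrinks the KV cache, here is a back-of-the-envelope sketch; the layer/head/dimension numbers are illustrative assumptions, not Hunyuan-Large's actual configuration:
```
# Back-of-the-envelope KV-cache sizing with and without GQA.
# All numbers below are illustrative assumptions, NOT Hunyuan-Large's real config.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, dtype_bytes=2):
    # Two cached tensors (K and V), each layers x kv_heads x head_dim x seq_len.
    return 2 * layers * kv_heads * head_dim * seq_len * dtype_bytes

layers, q_heads, head_dim, seq_len = 64, 64, 128, 256_000  # assumed values

mha = kv_cache_bytes(layers, kv_heads=q_heads, head_dim=head_dim, seq_len=seq_len)
gqa = kv_cache_bytes(layers, kv_heads=8, head_dim=head_dim, seq_len=seq_len)  # 8 KV groups

print(f"MHA KV cache: {mha / 2**30:.1f} GiB")
print(f"GQA KV cache: {gqa / 2**30:.1f} GiB ({q_heads // 8}x smaller)")
# CLA additionally shares KV caches across layers, cutting the layer term further.
```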
## Benchmark Evaluation
**Hunyuan-Large pre-trained model** achieves the best overall performance compared to both Dense and MoE based
competitors having similar activated parameter sizes. For aggregated benchmarks such as MMLU, MMLU-Pro, and CMMLU,
Hunyuan-Large consistently achieves the best performance, confirming its comprehensive abilities on aggregated tasks.
Hunyuan-Large also shows superior performance in commonsense understanding and reasoning, and in classical NLP tasks
such as QA and reading comprehension (e.g., CommonsenseQA, PIQA, and TriviaQA).
For mathematics capability, Hunyuan-Large outperforms all baselines on the math datasets GSM8K and MATH,
and also gains the best results on CMATH in Chinese. We also observe that Hunyuan-Large achieves the overall
best performance across all Chinese tasks (e.g., CMMLU, C-Eval).
| Model | LLama3.1-405B | LLama3.1-70B | Mixtral-8x22B | DeepSeek-V2 | Hunyuan-Large |
|------------------|---------------|--------------|---------------|-------------|---------------|
| MMLU | 85.2 | 79.3 | 77.8 | 78.5 | **88.4** |
| MMLU-Pro | **61.6** | 53.8 | 49.5 | - | 60.2 |
| BBH | 85.9 | 81.6 | 78.9 | 78.9 | **86.3** |
| HellaSwag | - | - | **88.7** | 87.8 | 86.8 |
| CommonsenseQA | 85.8 | 84.1 | 82.4 | - | **92.9** |
| WinoGrande | 86.7 | 85.3 | 85.0 | 84.9 | **88.7** |
| PIQA | - | - | 83.6 | 83.7 | **88.3** |
| NaturalQuestions | - | - | 39.6 | 38.7 | **52.8** |
| DROP | 84.8 | 79.6 | 80.4 | 80.1 | **88.9** |
| ARC-C | **96.1** | 92.9 | 91.2 | 92.4 | 95.0 |
| TriviaQA | - | - | 82.1 | 79.9 | **89.2** |
| CMMLU | - | - | 60.0 | 84.0 | **90.2** |
| C-Eval | - | - | 59.6 | 81.7 | **91.9** |
| C3 | - | - | 71.4 | 77.4 | **82.3** |
| GSM8K | 89.0 | 83.7 | 83.7 | 79.2 | **92.8** |
| MATH | 53.8 | 41.4 | 42.5 | 43.6 | **69.8** |
| CMATH | - | - | 72.3 | 78.7 | **91.3** |
| HumanEval | 61.0 | 58.5 | 53.1 | 48.8 | **71.4** |
| MBPP | **73.4** | 68.6 | 64.2 | 66.6 | 72.6 |
**Hunyuan-Large-Instruct** achieves consistent improvements on most types of tasks compared to LLMs with similar
activated parameter counts, indicating the effectiveness of our post-training. Delving into the model performance
in different categories of benchmarks, we find that our instruct model achieves the best performance on the MMLU and MATH datasets.
Notably, on the MMLU dataset, our model demonstrates a significant improvement, outperforming the LLama3.1-405B model by 2.6%.
This enhancement is not just marginal but indicative of the Hunyuan-Large-Instruct’s superior understanding and reasoning
capabilities across a wide array of language understanding tasks. The model’s prowess is further underscored in its performance
on the MATH dataset, where it surpasses the LLama3.1-405B by a notable margin of 3.6%.
Remarkably, this leap in accuracy is achieved with only 52 billion activated parameters, underscoring the efficiency of our model.
| Model | LLama3.1 405B Inst. | LLama3.1 70B Inst. | Mixtral 8x22B Inst. | DeepSeekV2.5 Chat | Hunyuan-Large Inst. |
|----------------------|---------------------|--------------------|---------------------|-------------------|---------------------|
| MMLU | 87.3 | 83.6 | 77.8 | 80.4 | **89.9** |
| CMMLU | - | - | 61.0 | - | **90.4** |
| C-Eval | - | - | 60.0 | - | **88.6** |
| BBH | - | - | 78.4 | 84.3 | **89.5** |
| HellaSwag | - | - | 86.0 | **90.3** | 88.5 |
| ARC-C | **96.9** | 94.8 | 90.0 | - | 94.6 |
| GPQA_diamond | **51.1** | 46.7 | - | - | 42.4 |
| MATH | 73.8 | 68.0 | 49.8 | 74.7 | **77.4** |
| HumanEval | 89.0 | 80.5 | 75.0 | 89.0 | **90.0** |
| AlignBench | 6.0 | 5.9 | 6.2 | 8.0 | **8.3** |
| MT-Bench | 9.1 | 8.8 | 8.1 | 9.0 | **9.4** |
| IFEval strict-prompt | **86.0** | 83.6 | 71.2 | - | 85.0 |
| Arena-Hard | 69.3 | 55.7 | - | 76.2 | **81.8** |
| AlpacaEval-2.0 | 39.3 | 34.3 | 30.9 | 50.5 | **51.8** |
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{sun2024hunyuanlargeopensourcemoemodel,
title={Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent},
author={Xingwu Sun and Yanfeng Chen and Yiqing Huang and Ruobing Xie and Jiaqi Zhu and Kai Zhang and Shuaipeng Li and Zhen Yang and Jonny Han and Xiaobo Shu and Jiahao Bu and Zhongzhi Chen and Xuemeng Huang and Fengzong Lian and Saiyong Yang and Jianfeng Yan and Yuyuan Zeng and Xiaoqin Ren and Chao Yu and Lulu Wu and Yue Mao and Tao Yang and Suncong Zheng and Kan Wu and Dian Jiao and Jinbao Xue and Xipeng Zhang and Decheng Wu and Kai Liu and Dengpeng Wu and Guanghui Xu and Shaohua Chen and Shuang Chen and Xiao Feng and Yigeng Hong and Junqiang Zheng and Chengcheng Xu and Zongwei Li and Xiong Kuang and Jianglu Hu and Yiqi Chen and Yuchi Deng and Guiyang Li and Ao Liu and Chenchen Zhang and Shihui Hu and Zilong Zhao and Zifan Wu and Yao Ding and Weichao Wang and Han Liu and Roberts Wang and Hao Fei and Peijie She and Ze Zhao and Xun Cao and Hai Wang and Fusheng Xiang and Mengyuan Huang and Zhiyuan Xiong and Bin Hu and Xuebin Hou and Lei Jiang and Jiajia Wu and Yaping Deng and Yi Shen and Qian Wang and Weijie Liu and Jie Liu and Meng Chen and Liang Dong and Weiwen Jia and Hu Chen and Feifei Liu and Rui Yuan and Huilin Xu and Zhenxiang Yan and Tengfei Cao and Zhichao Hu and Xinhua Feng and Dong Du and Tinghao She and Yangyu Tao and Feng Zhang and Jianchen Zhu and Chengzhong Xu and Xirui Li and Chong Zha and Wen Ouyang and Yinben Xia and Xiang Li and Zekun He and Rongpeng Chen and Jiawei Song and Ruibin Chen and Fan Jiang and Chongqing Zhao and Bo Wang and Hao Gong and Rong Gan and Winston Hu and Zhanhui Kang and Yong Yang and Yuhong Liu and Di Wang and Jie Jiang},
year={2024},
eprint={2411.02265},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2411.02265},
}
```
|
tencent-community/Hunyuan-A52B-Instruct-original
|
tencent-community
| 2024-11-05T23:31:38Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hunyuan",
"text-generation",
"conversational",
"custom_code",
"en",
"arxiv:2411.02265",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-11-05T13:34:00Z |
---
license_link: https://huggingface.co/tencent/Tencent-Hunyuan-Large/blob/main/LICENSE.txt
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
The original repo is here: https://huggingface.co/tencent/Tencent-Hunyuan-Large
This is the Hunyuan-A52B-Instruct model uploaded into its own repository.
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
### Model Introduction
With the rapid development of artificial intelligence technology, large language models (LLMs) have made significant progress in fields such as natural language processing, computer vision, and scientific tasks. However, as the scale of these models increases, optimizing resource consumption while maintaining high performance has become a key challenge. To address this challenge, we have explored Mixture of Experts (MoE) models. The currently unveiled Hunyuan-Large (Hunyuan-MoE-A52B) model is the largest open-source Transformer-based MoE model in the industry, featuring a total of 389 billion parameters and 52 billion active parameters.
By open-sourcing the Hunyuan-Large model and revealing related technical details, we hope to inspire more researchers with innovative ideas and collectively advance the progress and application of AI technology. We welcome you to join our open-source community to explore and optimize future AI models together!
### Introduction to Model Technical Advantages
#### Model
- **High-Quality Synthetic Data**: By enhancing training with synthetic data, Hunyuan-Large can learn richer representations, handle long-context inputs, and generalize better to unseen data.
- **KV Cache Compression**: Utilizes Grouped Query Attention (GQA) and Cross-Layer Attention (CLA) strategies to significantly reduce memory usage and computational overhead of KV caches, improving inference throughput.
- **Expert-Specific Learning Rate Scaling**: Sets different learning rates for different experts to ensure each sub-model effectively learns from the data and contributes to overall performance.
- **Long-Context Processing Capability**: The pre-trained model supports text sequences up to 256K, and the Instruct model supports up to 128K, significantly enhancing the ability to handle long-context tasks.
- **Extensive Benchmarking**: Conducts extensive experiments across various languages and tasks to validate the practical effectiveness and safety of Hunyuan-Large.
## Benchmark Evaluation
**Hunyuan-Large pre-trained model** achieves the best overall performance compared to both Dense and MoE based
competitors having similar activated parameter sizes. For aggregated benchmarks such as MMLU, MMLU-Pro, and CMMLU,
Hunyuan-Large consistently achieves the best performance, confirming its comprehensive abilities on aggregated tasks.
Hunyuan-Large also shows superior performance in commonsense understanding and reasoning, and in classical NLP tasks
such as QA and reading comprehension (e.g., CommonsenseQA, PIQA, and TriviaQA).
For mathematics capability, Hunyuan-Large outperforms all baselines on the math datasets GSM8K and MATH,
and also gains the best results on CMATH in Chinese. We also observe that Hunyuan-Large achieves the overall
best performance across all Chinese tasks (e.g., CMMLU, C-Eval).
| Model | LLama3.1-405B | LLama3.1-70B | Mixtral-8x22B | DeepSeek-V2 | Hunyuan-Large |
|------------------|---------------|--------------|---------------|-------------|---------------|
| MMLU | 85.2 | 79.3 | 77.8 | 78.5 | **88.4** |
| MMLU-Pro | **61.6** | 53.8 | 49.5 | - | 60.2 |
| BBH | 85.9 | 81.6 | 78.9 | 78.9 | **86.3** |
| HellaSwag | - | - | **88.7** | 87.8 | 86.8 |
| CommonsenseQA | 85.8 | 84.1 | 82.4 | - | **92.9** |
| WinoGrande | 86.7 | 85.3 | 85.0 | 84.9 | **88.7** |
| PIQA | - | - | 83.6 | 83.7 | **88.3** |
| NaturalQuestions | - | - | 39.6 | 38.7 | **52.8** |
| DROP | 84.8 | 79.6 | 80.4 | 80.1 | **88.9** |
| ARC-C | **96.1** | 92.9 | 91.2 | 92.4 | 95.0 |
| TriviaQA | - | - | 82.1 | 79.9 | **89.2** |
| CMMLU | - | - | 60.0 | 84.0 | **90.2** |
| C-Eval | - | - | 59.6 | 81.7 | **91.9** |
| C3 | - | - | 71.4 | 77.4 | **82.3** |
| GSM8K | 89.0 | 83.7 | 83.7 | 79.2 | **92.8** |
| MATH | 53.8 | 41.4 | 42.5 | 43.6 | **69.8** |
| CMATH | - | - | 72.3 | 78.7 | **91.3** |
| HumanEval | 61.0 | 58.5 | 53.1 | 48.8 | **71.4** |
| MBPP | **73.4** | 68.6 | 64.2 | 66.6 | 72.6 |
**Hunyuan-Large-Instruct** achieves consistent improvements on most types of tasks compared to LLMs with similar
activated parameter counts, indicating the effectiveness of our post-training. Delving into the model performance
in different categories of benchmarks, we find that our instruct model achieves the best performance on the MMLU and MATH datasets.
Notably, on the MMLU dataset, our model demonstrates a significant improvement, outperforming the LLama3.1-405B model by 2.6%.
This enhancement is not just marginal but indicative of the Hunyuan-Large-Instruct’s superior understanding and reasoning
capabilities across a wide array of language understanding tasks. The model’s prowess is further underscored in its performance
on the MATH dataset, where it surpasses the LLama3.1-405B by a notable margin of 3.6%.
Remarkably, this leap in accuracy is achieved with only 52 billion activated parameters, underscoring the efficiency of our model.
| Model | LLama3.1 405B Inst. | LLama3.1 70B Inst. | Mixtral 8x22B Inst. | DeepSeekV2.5 Chat | Hunyuan-Large Inst. |
|----------------------|---------------------|--------------------|---------------------|-------------------|---------------------|
| MMLU | 87.3 | 83.6 | 77.8 | 80.4 | **89.9** |
| CMMLU | - | - | 61.0 | - | **90.4** |
| C-Eval | - | - | 60.0 | - | **88.6** |
| BBH | - | - | 78.4 | 84.3 | **89.5** |
| HellaSwag | - | - | 86.0 | **90.3** | 88.5 |
| ARC-C | **96.9** | 94.8 | 90.0 | - | 94.6 |
| GPQA_diamond | **51.1** | 46.7 | - | - | 42.4 |
| MATH | 73.8 | 68.0 | 49.8 | 74.7 | **77.4** |
| HumanEval | 89.0 | 80.5 | 75.0 | 89.0 | **90.0** |
| AlignBench | 6.0 | 5.9 | 6.2 | 8.0 | **8.3** |
| MT-Bench | 9.1 | 8.8 | 8.1 | 9.0 | **9.4** |
| IFEval strict-prompt | **86.0** | 83.6 | 71.2 | - | 85.0 |
| Arena-Hard | 69.3 | 55.7 | - | 76.2 | **81.8** |
| AlpacaEval-2.0 | 39.3 | 34.3 | 30.9 | 50.5 | **51.8** |
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{sun2024hunyuanlargeopensourcemoemodel,
title={Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent},
author={Xingwu Sun and Yanfeng Chen and Yiqing Huang and Ruobing Xie and Jiaqi Zhu and Kai Zhang and Shuaipeng Li and Zhen Yang and Jonny Han and Xiaobo Shu and Jiahao Bu and Zhongzhi Chen and Xuemeng Huang and Fengzong Lian and Saiyong Yang and Jianfeng Yan and Yuyuan Zeng and Xiaoqin Ren and Chao Yu and Lulu Wu and Yue Mao and Tao Yang and Suncong Zheng and Kan Wu and Dian Jiao and Jinbao Xue and Xipeng Zhang and Decheng Wu and Kai Liu and Dengpeng Wu and Guanghui Xu and Shaohua Chen and Shuang Chen and Xiao Feng and Yigeng Hong and Junqiang Zheng and Chengcheng Xu and Zongwei Li and Xiong Kuang and Jianglu Hu and Yiqi Chen and Yuchi Deng and Guiyang Li and Ao Liu and Chenchen Zhang and Shihui Hu and Zilong Zhao and Zifan Wu and Yao Ding and Weichao Wang and Han Liu and Roberts Wang and Hao Fei and Peijie She and Ze Zhao and Xun Cao and Hai Wang and Fusheng Xiang and Mengyuan Huang and Zhiyuan Xiong and Bin Hu and Xuebin Hou and Lei Jiang and Jiajia Wu and Yaping Deng and Yi Shen and Qian Wang and Weijie Liu and Jie Liu and Meng Chen and Liang Dong and Weiwen Jia and Hu Chen and Feifei Liu and Rui Yuan and Huilin Xu and Zhenxiang Yan and Tengfei Cao and Zhichao Hu and Xinhua Feng and Dong Du and Tinghao She and Yangyu Tao and Feng Zhang and Jianchen Zhu and Chengzhong Xu and Xirui Li and Chong Zha and Wen Ouyang and Yinben Xia and Xiang Li and Zekun He and Rongpeng Chen and Jiawei Song and Ruibin Chen and Fan Jiang and Chongqing Zhao and Bo Wang and Hao Gong and Rong Gan and Winston Hu and Zhanhui Kang and Yong Yang and Yuhong Liu and Di Wang and Jie Jiang},
year={2024},
eprint={2411.02265},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2411.02265},
}
```
|
atmiaxue/detr_finetuned_cppe5
|
atmiaxue
| 2024-11-05T23:27:53Z | 219 | 0 |
transformers
|
[
"transformers",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-11-05T16:00:17Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: detr_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 0
### Training results
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cpu
- Datasets 2.19.2
- Tokenizers 0.20.3
|
Primeness/deeznutz0110
|
Primeness
| 2024-11-05T23:24:51Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T22:20:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MayBashendy/ASAP_FineTuningBERT_Aug_k5_task1_organization_fold4
|
MayBashendy
| 2024-11-05T23:17:48Z | 164 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T22:45:19Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_Aug_k5_task1_organization_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_Aug_k5_task1_organization_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5677
- Qwk: 0.7007
- Mse: 0.5677
- Rmse: 0.7534
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0190 | 2 | 8.6874 | 0.0083 | 8.6874 | 2.9474 |
| No log | 0.0381 | 4 | 7.7044 | 0.0074 | 7.7044 | 2.7757 |
| No log | 0.0571 | 6 | 6.9518 | 0.0018 | 6.9518 | 2.6366 |
| No log | 0.0762 | 8 | 5.9679 | 0.0018 | 5.9679 | 2.4429 |
| No log | 0.0952 | 10 | 4.8453 | 0.0018 | 4.8453 | 2.2012 |
| No log | 0.1143 | 12 | 3.7791 | 0.0595 | 3.7791 | 1.9440 |
| No log | 0.1333 | 14 | 3.1396 | 0.0138 | 3.1397 | 1.7719 |
| No log | 0.1524 | 16 | 2.4795 | 0.0040 | 2.4795 | 1.5747 |
| No log | 0.1714 | 18 | 2.0691 | 0.0040 | 2.0691 | 1.4384 |
| No log | 0.1905 | 20 | 1.5416 | 0.0040 | 1.5416 | 1.2416 |
| No log | 0.2095 | 22 | 1.2438 | 0.0996 | 1.2438 | 1.1152 |
| No log | 0.2286 | 24 | 1.0621 | 0.0316 | 1.0621 | 1.0306 |
| No log | 0.2476 | 26 | 0.9220 | 0.0212 | 0.9220 | 0.9602 |
| No log | 0.2667 | 28 | 0.8884 | 0.0238 | 0.8884 | 0.9426 |
| No log | 0.2857 | 30 | 0.8471 | 0.0212 | 0.8471 | 0.9204 |
| No log | 0.3048 | 32 | 0.8742 | 0.0212 | 0.8742 | 0.9350 |
| No log | 0.3238 | 34 | 0.8797 | 0.0107 | 0.8797 | 0.9379 |
| No log | 0.3429 | 36 | 0.9688 | 0.0107 | 0.9688 | 0.9843 |
| No log | 0.3619 | 38 | 0.8858 | 0.0107 | 0.8858 | 0.9412 |
| No log | 0.3810 | 40 | 0.8386 | 0.0212 | 0.8386 | 0.9157 |
| No log | 0.4 | 42 | 0.8179 | 0.0212 | 0.8179 | 0.9044 |
| No log | 0.4190 | 44 | 0.7712 | 0.0212 | 0.7712 | 0.8782 |
| No log | 0.4381 | 46 | 0.7180 | 0.0683 | 0.7180 | 0.8474 |
| No log | 0.4571 | 48 | 0.7002 | 0.4200 | 0.7002 | 0.8368 |
| No log | 0.4762 | 50 | 0.6120 | 0.3430 | 0.6120 | 0.7823 |
| No log | 0.4952 | 52 | 0.6335 | 0.1299 | 0.6335 | 0.7959 |
| No log | 0.5143 | 54 | 0.6101 | 0.1419 | 0.6101 | 0.7811 |
| No log | 0.5333 | 56 | 0.6845 | 0.1780 | 0.6845 | 0.8273 |
| No log | 0.5524 | 58 | 0.6283 | 0.1673 | 0.6283 | 0.7927 |
| No log | 0.5714 | 60 | 0.5990 | 0.4477 | 0.5990 | 0.7739 |
| No log | 0.5905 | 62 | 0.6396 | 0.4941 | 0.6396 | 0.7997 |
| No log | 0.6095 | 64 | 0.5899 | 0.1808 | 0.5899 | 0.7680 |
| No log | 0.6286 | 66 | 0.6982 | 0.2241 | 0.6982 | 0.8356 |
| No log | 0.6476 | 68 | 0.7756 | 0.2320 | 0.7756 | 0.8807 |
| No log | 0.6667 | 70 | 0.7160 | 0.2103 | 0.7160 | 0.8462 |
| No log | 0.6857 | 72 | 0.7326 | 0.1216 | 0.7326 | 0.8559 |
| No log | 0.7048 | 74 | 0.7159 | 0.1055 | 0.7159 | 0.8461 |
| No log | 0.7238 | 76 | 0.7309 | 0.0975 | 0.7309 | 0.8549 |
| No log | 0.7429 | 78 | 0.7047 | 0.1207 | 0.7047 | 0.8395 |
| No log | 0.7619 | 80 | 0.5974 | 0.2017 | 0.5974 | 0.7729 |
| No log | 0.7810 | 82 | 0.5955 | 0.2389 | 0.5955 | 0.7717 |
| No log | 0.8 | 84 | 0.6995 | 0.2592 | 0.6995 | 0.8364 |
| No log | 0.8190 | 86 | 0.6224 | 0.2740 | 0.6224 | 0.7889 |
| No log | 0.8381 | 88 | 0.5607 | 0.3005 | 0.5607 | 0.7488 |
| No log | 0.8571 | 90 | 0.5699 | 0.2702 | 0.5699 | 0.7549 |
| No log | 0.8762 | 92 | 0.5847 | 0.2469 | 0.5847 | 0.7647 |
| No log | 0.8952 | 94 | 0.5945 | 0.3417 | 0.5945 | 0.7711 |
| No log | 0.9143 | 96 | 0.5890 | 0.3145 | 0.5890 | 0.7675 |
| No log | 0.9333 | 98 | 0.5658 | 0.3671 | 0.5658 | 0.7522 |
| No log | 0.9524 | 100 | 0.5416 | 0.4128 | 0.5416 | 0.7359 |
| No log | 0.9714 | 102 | 0.5381 | 0.4652 | 0.5381 | 0.7335 |
| No log | 0.9905 | 104 | 0.5495 | 0.4437 | 0.5495 | 0.7413 |
| No log | 1.0095 | 106 | 0.5747 | 0.4681 | 0.5747 | 0.7581 |
| No log | 1.0286 | 108 | 0.5952 | 0.5315 | 0.5952 | 0.7715 |
| No log | 1.0476 | 110 | 0.5299 | 0.5326 | 0.5299 | 0.7280 |
| No log | 1.0667 | 112 | 0.4974 | 0.5583 | 0.4974 | 0.7052 |
| No log | 1.0857 | 114 | 0.4946 | 0.5803 | 0.4946 | 0.7033 |
| No log | 1.1048 | 116 | 0.4852 | 0.5506 | 0.4852 | 0.6966 |
| No log | 1.1238 | 118 | 0.4846 | 0.5508 | 0.4846 | 0.6961 |
| No log | 1.1429 | 120 | 0.4891 | 0.5832 | 0.4891 | 0.6994 |
| No log | 1.1619 | 122 | 0.5942 | 0.6014 | 0.5942 | 0.7708 |
| No log | 1.1810 | 124 | 0.6632 | 0.5729 | 0.6632 | 0.8144 |
| No log | 1.2 | 126 | 0.6567 | 0.5711 | 0.6567 | 0.8104 |
| No log | 1.2190 | 128 | 0.5892 | 0.5910 | 0.5892 | 0.7676 |
| No log | 1.2381 | 130 | 0.5092 | 0.5888 | 0.5092 | 0.7136 |
| No log | 1.2571 | 132 | 0.5933 | 0.5838 | 0.5933 | 0.7703 |
| No log | 1.2762 | 134 | 0.7703 | 0.5188 | 0.7703 | 0.8777 |
| No log | 1.2952 | 136 | 0.6612 | 0.5640 | 0.6612 | 0.8131 |
| No log | 1.3143 | 138 | 0.4472 | 0.5771 | 0.4472 | 0.6688 |
| No log | 1.3333 | 140 | 0.4698 | 0.4578 | 0.4698 | 0.6854 |
| No log | 1.3524 | 142 | 0.4580 | 0.4840 | 0.4580 | 0.6768 |
| No log | 1.3714 | 144 | 0.4919 | 0.5730 | 0.4919 | 0.7013 |
| No log | 1.3905 | 146 | 0.5936 | 0.5885 | 0.5936 | 0.7705 |
| No log | 1.4095 | 148 | 0.5425 | 0.5769 | 0.5425 | 0.7366 |
| No log | 1.4286 | 150 | 0.4663 | 0.4985 | 0.4663 | 0.6829 |
| No log | 1.4476 | 152 | 0.5809 | 0.3454 | 0.5809 | 0.7622 |
| No log | 1.4667 | 154 | 0.6326 | 0.3530 | 0.6326 | 0.7954 |
| No log | 1.4857 | 156 | 0.5213 | 0.4299 | 0.5213 | 0.7220 |
| No log | 1.5048 | 158 | 0.4655 | 0.5473 | 0.4655 | 0.6823 |
| No log | 1.5238 | 160 | 0.5404 | 0.5549 | 0.5404 | 0.7351 |
| No log | 1.5429 | 162 | 0.5309 | 0.5523 | 0.5309 | 0.7286 |
| No log | 1.5619 | 164 | 0.4765 | 0.5596 | 0.4765 | 0.6903 |
| No log | 1.5810 | 166 | 0.4783 | 0.5799 | 0.4783 | 0.6916 |
| No log | 1.6 | 168 | 0.5238 | 0.5831 | 0.5238 | 0.7237 |
| No log | 1.6190 | 170 | 0.5086 | 0.5904 | 0.5086 | 0.7132 |
| No log | 1.6381 | 172 | 0.5405 | 0.6025 | 0.5405 | 0.7352 |
| No log | 1.6571 | 174 | 0.5179 | 0.5732 | 0.5179 | 0.7196 |
| No log | 1.6762 | 176 | 0.5047 | 0.5388 | 0.5047 | 0.7104 |
| No log | 1.6952 | 178 | 0.5265 | 0.5245 | 0.5265 | 0.7256 |
| No log | 1.7143 | 180 | 0.4971 | 0.5466 | 0.4971 | 0.7050 |
| No log | 1.7333 | 182 | 0.5065 | 0.5769 | 0.5065 | 0.7117 |
| No log | 1.7524 | 184 | 0.6637 | 0.5920 | 0.6637 | 0.8147 |
| No log | 1.7714 | 186 | 0.7273 | 0.5750 | 0.7273 | 0.8528 |
| No log | 1.7905 | 188 | 0.5994 | 0.5978 | 0.5994 | 0.7742 |
| No log | 1.8095 | 190 | 0.4526 | 0.5771 | 0.4526 | 0.6727 |
| No log | 1.8286 | 192 | 0.4406 | 0.5772 | 0.4406 | 0.6638 |
| No log | 1.8476 | 194 | 0.4399 | 0.5904 | 0.4399 | 0.6633 |
| No log | 1.8667 | 196 | 0.4452 | 0.6012 | 0.4452 | 0.6673 |
| No log | 1.8857 | 198 | 0.4353 | 0.5886 | 0.4353 | 0.6598 |
| No log | 1.9048 | 200 | 0.4380 | 0.5960 | 0.4380 | 0.6618 |
| No log | 1.9238 | 202 | 0.4378 | 0.5848 | 0.4378 | 0.6617 |
| No log | 1.9429 | 204 | 0.4455 | 0.5454 | 0.4455 | 0.6675 |
| No log | 1.9619 | 206 | 0.4400 | 0.5875 | 0.4400 | 0.6633 |
| No log | 1.9810 | 208 | 0.4942 | 0.6067 | 0.4942 | 0.7030 |
| No log | 2.0 | 210 | 0.5499 | 0.6009 | 0.5499 | 0.7415 |
| No log | 2.0190 | 212 | 0.4883 | 0.6015 | 0.4883 | 0.6988 |
| No log | 2.0381 | 214 | 0.4918 | 0.6038 | 0.4918 | 0.7013 |
| No log | 2.0571 | 216 | 0.5462 | 0.5966 | 0.5462 | 0.7391 |
| No log | 2.0762 | 218 | 0.5010 | 0.6004 | 0.5010 | 0.7078 |
| No log | 2.0952 | 220 | 0.4825 | 0.6076 | 0.4825 | 0.6946 |
| No log | 2.1143 | 222 | 0.4994 | 0.6056 | 0.4994 | 0.7067 |
| No log | 2.1333 | 224 | 0.5181 | 0.6092 | 0.5181 | 0.7198 |
| No log | 2.1524 | 226 | 0.4670 | 0.6143 | 0.4670 | 0.6834 |
| No log | 2.1714 | 228 | 0.4285 | 0.6087 | 0.4285 | 0.6546 |
| No log | 2.1905 | 230 | 0.4561 | 0.6196 | 0.4561 | 0.6753 |
| No log | 2.2095 | 232 | 0.5733 | 0.6190 | 0.5733 | 0.7572 |
| No log | 2.2286 | 234 | 0.5267 | 0.6191 | 0.5267 | 0.7258 |
| No log | 2.2476 | 236 | 0.4602 | 0.6112 | 0.4602 | 0.6783 |
| No log | 2.2667 | 238 | 0.4392 | 0.6084 | 0.4392 | 0.6627 |
| No log | 2.2857 | 240 | 0.4493 | 0.6204 | 0.4493 | 0.6703 |
| No log | 2.3048 | 242 | 0.5219 | 0.6473 | 0.5219 | 0.7224 |
| No log | 2.3238 | 244 | 0.4751 | 0.6465 | 0.4751 | 0.6893 |
| No log | 2.3429 | 246 | 0.4385 | 0.5977 | 0.4385 | 0.6622 |
| No log | 2.3619 | 248 | 0.4732 | 0.6530 | 0.4732 | 0.6879 |
| No log | 2.3810 | 250 | 0.5374 | 0.6566 | 0.5374 | 0.7331 |
| No log | 2.4 | 252 | 0.4968 | 0.6540 | 0.4968 | 0.7048 |
| No log | 2.4190 | 254 | 0.5071 | 0.6634 | 0.5071 | 0.7121 |
| No log | 2.4381 | 256 | 0.5339 | 0.6594 | 0.5339 | 0.7307 |
| No log | 2.4571 | 258 | 0.4756 | 0.6325 | 0.4756 | 0.6897 |
| No log | 2.4762 | 260 | 0.4966 | 0.6421 | 0.4966 | 0.7047 |
| No log | 2.4952 | 262 | 0.5139 | 0.6268 | 0.5139 | 0.7169 |
| No log | 2.5143 | 264 | 0.5290 | 0.6357 | 0.5290 | 0.7273 |
| No log | 2.5333 | 266 | 0.4939 | 0.6357 | 0.4939 | 0.7028 |
| No log | 2.5524 | 268 | 0.5121 | 0.6679 | 0.5121 | 0.7156 |
| No log | 2.5714 | 270 | 0.4834 | 0.6666 | 0.4834 | 0.6953 |
| No log | 2.5905 | 272 | 0.4753 | 0.6539 | 0.4753 | 0.6894 |
| No log | 2.6095 | 274 | 0.4890 | 0.6662 | 0.4890 | 0.6993 |
| No log | 2.6286 | 276 | 0.5176 | 0.6673 | 0.5176 | 0.7194 |
| No log | 2.6476 | 278 | 0.5322 | 0.6763 | 0.5322 | 0.7295 |
| No log | 2.6667 | 280 | 0.5292 | 0.6706 | 0.5292 | 0.7274 |
| No log | 2.6857 | 282 | 0.4690 | 0.6357 | 0.4690 | 0.6848 |
| No log | 2.7048 | 284 | 0.4831 | 0.6390 | 0.4831 | 0.6951 |
| No log | 2.7238 | 286 | 0.4930 | 0.6432 | 0.4930 | 0.7022 |
| No log | 2.7429 | 288 | 0.4397 | 0.6011 | 0.4397 | 0.6631 |
| No log | 2.7619 | 290 | 0.4318 | 0.5879 | 0.4318 | 0.6571 |
| No log | 2.7810 | 292 | 0.4563 | 0.5893 | 0.4563 | 0.6755 |
| No log | 2.8 | 294 | 0.5197 | 0.6195 | 0.5197 | 0.7209 |
| No log | 2.8190 | 296 | 0.4976 | 0.5851 | 0.4976 | 0.7054 |
| No log | 2.8381 | 298 | 0.4387 | 0.5618 | 0.4387 | 0.6624 |
| No log | 2.8571 | 300 | 0.4428 | 0.5710 | 0.4428 | 0.6654 |
| No log | 2.8762 | 302 | 0.4992 | 0.6165 | 0.4992 | 0.7065 |
| No log | 2.8952 | 304 | 0.5088 | 0.6280 | 0.5088 | 0.7133 |
| No log | 2.9143 | 306 | 0.4597 | 0.6058 | 0.4597 | 0.6780 |
| No log | 2.9333 | 308 | 0.4497 | 0.5586 | 0.4497 | 0.6706 |
| No log | 2.9524 | 310 | 0.4724 | 0.5709 | 0.4724 | 0.6873 |
| No log | 2.9714 | 312 | 0.4838 | 0.5743 | 0.4838 | 0.6956 |
| No log | 2.9905 | 314 | 0.5058 | 0.6530 | 0.5058 | 0.7112 |
| No log | 3.0095 | 316 | 0.5588 | 0.6753 | 0.5588 | 0.7475 |
| No log | 3.0286 | 318 | 0.5639 | 0.6839 | 0.5639 | 0.7510 |
| No log | 3.0476 | 320 | 0.5162 | 0.6910 | 0.5162 | 0.7185 |
| No log | 3.0667 | 322 | 0.4950 | 0.6767 | 0.4950 | 0.7036 |
| No log | 3.0857 | 324 | 0.4819 | 0.6782 | 0.4819 | 0.6942 |
| No log | 3.1048 | 326 | 0.4374 | 0.6277 | 0.4374 | 0.6614 |
| No log | 3.1238 | 328 | 0.4486 | 0.6704 | 0.4486 | 0.6698 |
| No log | 3.1429 | 330 | 0.4264 | 0.6055 | 0.4264 | 0.6530 |
| No log | 3.1619 | 332 | 0.4406 | 0.5353 | 0.4406 | 0.6638 |
| No log | 3.1810 | 334 | 0.4314 | 0.5298 | 0.4314 | 0.6568 |
| No log | 3.2 | 336 | 0.4202 | 0.6064 | 0.4202 | 0.6483 |
| No log | 3.2190 | 338 | 0.6467 | 0.6772 | 0.6467 | 0.8042 |
| No log | 3.2381 | 340 | 0.8811 | 0.6475 | 0.8811 | 0.9387 |
| No log | 3.2571 | 342 | 0.7795 | 0.6513 | 0.7795 | 0.8829 |
| No log | 3.2762 | 344 | 0.4841 | 0.6332 | 0.4841 | 0.6957 |
| No log | 3.2952 | 346 | 0.4289 | 0.5369 | 0.4289 | 0.6549 |
| No log | 3.3143 | 348 | 0.4312 | 0.5758 | 0.4312 | 0.6566 |
| No log | 3.3333 | 350 | 0.4721 | 0.6445 | 0.4721 | 0.6871 |
| No log | 3.3524 | 352 | 0.5414 | 0.6842 | 0.5414 | 0.7358 |
| No log | 3.3714 | 354 | 0.6609 | 0.6753 | 0.6609 | 0.8130 |
| No log | 3.3905 | 356 | 0.5441 | 0.7097 | 0.5441 | 0.7376 |
| No log | 3.4095 | 358 | 0.4351 | 0.6500 | 0.4351 | 0.6596 |
| No log | 3.4286 | 360 | 0.4208 | 0.5873 | 0.4208 | 0.6487 |
| No log | 3.4476 | 362 | 0.4341 | 0.6550 | 0.4341 | 0.6589 |
| No log | 3.4667 | 364 | 0.5046 | 0.6866 | 0.5046 | 0.7104 |
| No log | 3.4857 | 366 | 0.5495 | 0.6871 | 0.5495 | 0.7413 |
| No log | 3.5048 | 368 | 0.4823 | 0.6836 | 0.4823 | 0.6944 |
| No log | 3.5238 | 370 | 0.4425 | 0.6509 | 0.4425 | 0.6652 |
| No log | 3.5429 | 372 | 0.4547 | 0.6600 | 0.4547 | 0.6743 |
| No log | 3.5619 | 374 | 0.5321 | 0.6889 | 0.5321 | 0.7295 |
| No log | 3.5810 | 376 | 0.5633 | 0.6973 | 0.5633 | 0.7505 |
| No log | 3.6 | 378 | 0.5964 | 0.7059 | 0.5964 | 0.7722 |
| No log | 3.6190 | 380 | 0.5256 | 0.6902 | 0.5256 | 0.7250 |
| No log | 3.6381 | 382 | 0.4470 | 0.6655 | 0.4470 | 0.6686 |
| No log | 3.6571 | 384 | 0.4664 | 0.6703 | 0.4664 | 0.6829 |
| No log | 3.6762 | 386 | 0.4958 | 0.6860 | 0.4958 | 0.7042 |
| No log | 3.6952 | 388 | 0.5978 | 0.6886 | 0.5978 | 0.7732 |
| No log | 3.7143 | 390 | 0.5089 | 0.6896 | 0.5089 | 0.7133 |
| No log | 3.7333 | 392 | 0.4275 | 0.5968 | 0.4275 | 0.6539 |
| No log | 3.7524 | 394 | 0.4840 | 0.4976 | 0.4840 | 0.6957 |
| No log | 3.7714 | 396 | 0.4763 | 0.5187 | 0.4763 | 0.6902 |
| No log | 3.7905 | 398 | 0.4623 | 0.6262 | 0.4623 | 0.6799 |
| No log | 3.8095 | 400 | 0.5607 | 0.7016 | 0.5607 | 0.7488 |
| No log | 3.8286 | 402 | 0.5329 | 0.6785 | 0.5329 | 0.7300 |
| No log | 3.8476 | 404 | 0.4593 | 0.6081 | 0.4593 | 0.6777 |
| No log | 3.8667 | 406 | 0.4659 | 0.5325 | 0.4659 | 0.6826 |
| No log | 3.8857 | 408 | 0.4509 | 0.5323 | 0.4509 | 0.6715 |
| No log | 3.9048 | 410 | 0.4365 | 0.6456 | 0.4365 | 0.6607 |
| No log | 3.9238 | 412 | 0.6149 | 0.7104 | 0.6149 | 0.7842 |
| No log | 3.9429 | 414 | 0.7383 | 0.6997 | 0.7383 | 0.8592 |
| No log | 3.9619 | 416 | 0.6329 | 0.7085 | 0.6329 | 0.7955 |
| No log | 3.9810 | 418 | 0.4625 | 0.6581 | 0.4625 | 0.6801 |
| No log | 4.0 | 420 | 0.4246 | 0.5960 | 0.4246 | 0.6516 |
| No log | 4.0190 | 422 | 0.4298 | 0.5602 | 0.4298 | 0.6556 |
| No log | 4.0381 | 424 | 0.4329 | 0.6328 | 0.4329 | 0.6580 |
| No log | 4.0571 | 426 | 0.5020 | 0.6724 | 0.5020 | 0.7085 |
| No log | 4.0762 | 428 | 0.5432 | 0.6834 | 0.5432 | 0.7370 |
| No log | 4.0952 | 430 | 0.4826 | 0.6607 | 0.4826 | 0.6947 |
| No log | 4.1143 | 432 | 0.4525 | 0.6423 | 0.4525 | 0.6727 |
| No log | 4.1333 | 434 | 0.4371 | 0.6231 | 0.4371 | 0.6611 |
| No log | 4.1524 | 436 | 0.4531 | 0.6515 | 0.4531 | 0.6732 |
| No log | 4.1714 | 438 | 0.5301 | 0.6735 | 0.5301 | 0.7281 |
| No log | 4.1905 | 440 | 0.7234 | 0.6828 | 0.7234 | 0.8505 |
| No log | 4.2095 | 442 | 0.7042 | 0.6842 | 0.7042 | 0.8392 |
| No log | 4.2286 | 444 | 0.5129 | 0.6739 | 0.5129 | 0.7162 |
| No log | 4.2476 | 446 | 0.4352 | 0.6081 | 0.4352 | 0.6597 |
| No log | 4.2667 | 448 | 0.4380 | 0.6130 | 0.4380 | 0.6618 |
| No log | 4.2857 | 450 | 0.4768 | 0.6700 | 0.4768 | 0.6905 |
| No log | 4.3048 | 452 | 0.5915 | 0.6817 | 0.5915 | 0.7691 |
| No log | 4.3238 | 454 | 0.5552 | 0.6820 | 0.5552 | 0.7451 |
| No log | 4.3429 | 456 | 0.4569 | 0.6431 | 0.4569 | 0.6759 |
| No log | 4.3619 | 458 | 0.4555 | 0.5601 | 0.4555 | 0.6749 |
| No log | 4.3810 | 460 | 0.4492 | 0.5933 | 0.4492 | 0.6702 |
| No log | 4.4 | 462 | 0.4827 | 0.6749 | 0.4827 | 0.6947 |
| No log | 4.4190 | 464 | 0.5514 | 0.6812 | 0.5514 | 0.7426 |
| No log | 4.4381 | 466 | 0.5201 | 0.6737 | 0.5201 | 0.7212 |
| No log | 4.4571 | 468 | 0.4659 | 0.6620 | 0.4659 | 0.6826 |
| No log | 4.4762 | 470 | 0.4985 | 0.6888 | 0.4985 | 0.7060 |
| No log | 4.4952 | 472 | 0.6297 | 0.6961 | 0.6297 | 0.7935 |
| No log | 4.5143 | 474 | 0.6538 | 0.6890 | 0.6538 | 0.8086 |
| No log | 4.5333 | 476 | 0.5544 | 0.6835 | 0.5544 | 0.7446 |
| No log | 4.5524 | 478 | 0.4912 | 0.6725 | 0.4912 | 0.7008 |
| No log | 4.5714 | 480 | 0.5191 | 0.6915 | 0.5191 | 0.7205 |
| No log | 4.5905 | 482 | 0.5230 | 0.6874 | 0.5230 | 0.7232 |
| No log | 4.6095 | 484 | 0.4770 | 0.6482 | 0.4770 | 0.6906 |
| No log | 4.6286 | 486 | 0.4862 | 0.6550 | 0.4862 | 0.6973 |
| No log | 4.6476 | 488 | 0.5563 | 0.6950 | 0.5563 | 0.7459 |
| No log | 4.6667 | 490 | 0.5947 | 0.7006 | 0.5947 | 0.7711 |
| No log | 4.6857 | 492 | 0.4855 | 0.6701 | 0.4855 | 0.6968 |
| No log | 4.7048 | 494 | 0.4417 | 0.5767 | 0.4417 | 0.6646 |
| No log | 4.7238 | 496 | 0.4384 | 0.5784 | 0.4384 | 0.6621 |
| No log | 4.7429 | 498 | 0.4722 | 0.6625 | 0.4722 | 0.6872 |
| 0.5603 | 4.7619 | 500 | 0.5582 | 0.7061 | 0.5582 | 0.7471 |
| 0.5603 | 4.7810 | 502 | 0.7218 | 0.6788 | 0.7218 | 0.8496 |
| 0.5603 | 4.8 | 504 | 0.6643 | 0.6905 | 0.6643 | 0.8151 |
| 0.5603 | 4.8190 | 506 | 0.5823 | 0.6950 | 0.5823 | 0.7631 |
| 0.5603 | 4.8381 | 508 | 0.5285 | 0.6956 | 0.5285 | 0.7270 |
| 0.5603 | 4.8571 | 510 | 0.5677 | 0.7007 | 0.5677 | 0.7534 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
qfq/llama_70B_metamath_4o_sft
|
qfq
| 2024-11-05T23:17:09Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T22:02:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
szybe/llama381binstruct_summarize_short_merged
|
szybe
| 2024-11-05T23:04:52Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-11-05T22:52:10Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pavitemple/finetuned-Accident-MultipleLabels-Video-subset-v2-checkpointing
|
pavitemple
| 2024-11-05T22:57:02Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-12-15T20:17:34Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned-Accident-MultipleLabels-Video-subset-v2-checkpointing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-Accident-MultipleLabels-Video-subset-v2-checkpointing
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7371
- Accuracy: 0.3704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 35
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.06 | 2 | 1.7265 | 0.3594 |
| No log | 1.06 | 4 | 1.6976 | 0.3906 |
| No log | 2.06 | 6 | 1.7503 | 0.3594 |
| No log | 3.06 | 8 | 1.8831 | 0.3125 |
| 1.7254 | 4.06 | 10 | 2.0285 | 0.1719 |
| 1.7254 | 5.06 | 12 | 2.0391 | 0.2812 |
| 1.7254 | 6.06 | 14 | 1.9737 | 0.3281 |
| 1.7254 | 7.06 | 16 | 1.8998 | 0.375 |
| 1.7254 | 8.06 | 18 | 1.8786 | 0.375 |
| 1.394 | 9.06 | 20 | 1.9054 | 0.3438 |
| 1.394 | 10.06 | 22 | 1.9474 | 0.3281 |
| 1.394 | 11.06 | 24 | 2.0032 | 0.3281 |
| 1.394 | 12.06 | 26 | 2.0729 | 0.3281 |
| 1.394 | 13.06 | 28 | 2.1081 | 0.3438 |
| 1.285 | 14.06 | 30 | 2.1190 | 0.3281 |
| 1.285 | 15.06 | 32 | 2.1188 | 0.3438 |
| 1.285 | 16.06 | 34 | 2.1155 | 0.3594 |
| 1.285 | 17.03 | 35 | 2.1163 | 0.3594 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
mradermacher/Llama3-v2-iterative-DPO-iter2-GGUF
|
mradermacher
| 2024-11-05T22:43:53Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:RLHFlow/Llama3-v2-iterative-DPO-iter2",
"base_model:quantized:RLHFlow/Llama3-v2-iterative-DPO-iter2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-05T19:33:55Z |
---
base_model: RLHFlow/Llama3-v2-iterative-DPO-iter2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/RLHFlow/Llama3-v2-iterative-DPO-iter2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
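As a concrete sketch (an illustration, not part of the original card), a single-file quant from the table below can be pulled and run with `llama-cpp-python`; the chosen file, context size, and prompt are assumptions:
```python
# Minimal sketch: download one GGUF quant and run it locally with llama-cpp-python.
# Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/Llama3-v2-iterative-DPO-iter2-GGUF",
    filename="Llama3-v2-iterative-DPO-iter2.Q4_K_M.gguf",  # "fast, recommended" per the table
)
llm = Llama(model_path=model_path, n_ctx=4096)  # context size is an arbitrary choice
out = llm("Q: What is DPO in one sentence?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```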
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter2-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter2-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter2-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter2-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter2-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter2-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter2.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter2-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter2-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter2-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter2-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter2-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter2-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter2-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
carlosleao/RAFDB-Facial-Expression-Recognition
|
carlosleao
| 2024-11-05T22:39:09Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:motheecreator/vit-Facial-Expression-Recognition",
"base_model:finetune:motheecreator/vit-Facial-Expression-Recognition",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-10-23T00:34:15Z |
---
library_name: transformers
base_model: motheecreator/vit-Facial-Expression-Recognition
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: RAFDB-Facial-Expression-Recognition
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RAFDB-Facial-Expression-Recognition
This model is a fine-tuned version of [motheecreator/vit-Facial-Expression-Recognition](https://huggingface.co/motheecreator/vit-Facial-Expression-Recognition) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5251
- Accuracy: 0.8198
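Since the card does not yet include usage code, here is a minimal sketch (an assumption, not from the original card) of running the checkpoint with the `transformers` image-classification pipeline; the image path is a placeholder:
```python
# Minimal sketch: facial-expression classification with the transformers pipeline.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="carlosleao/RAFDB-Facial-Expression-Recognition",
)
print(classifier("face.jpg"))  # "face.jpg" is a placeholder; returns [{label, score}, ...]
```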
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.0294 | 2.0833 | 100 | 1.7424 | 0.4547 |
| 0.8701 | 4.1667 | 200 | 0.7676 | 0.7324 |
| 0.6327 | 6.25 | 300 | 0.5953 | 0.7934 |
| 0.5058 | 8.3333 | 400 | 0.5574 | 0.8106 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF
|
mradermacher
| 2024-11-05T22:34:10Z | 17 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:RLHFlow/Llama3-v2-iterative-DPO-iter1",
"base_model:quantized:RLHFlow/Llama3-v2-iterative-DPO-iter1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-05T21:19:52Z |
---
base_model: RLHFlow/Llama3-v2-iterative-DPO-iter1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/RLHFlow/Llama3-v2-iterative-DPO-iter1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Llama3-v2-iterative-DPO-iter1-GGUF
|
mradermacher
| 2024-11-05T22:34:09Z | 11 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:RLHFlow/Llama3-v2-iterative-DPO-iter1",
"base_model:quantized:RLHFlow/Llama3-v2-iterative-DPO-iter1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-05T19:35:57Z |
---
base_model: RLHFlow/Llama3-v2-iterative-DPO-iter1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/RLHFlow/Llama3-v2-iterative-DPO-iter1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-v2-iterative-DPO-iter1-GGUF/resolve/main/Llama3-v2-iterative-DPO-iter1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf
|
RichardErkhov
| 2024-11-05T22:31:16Z | 9 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-05T17:57:44Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
eeve_leaderboard_inst_v1.5 - GGUF
- Model creator: https://huggingface.co/ENERGY-DRINK-LOVE/
- Original model: https://huggingface.co/ENERGY-DRINK-LOVE/eeve_leaderboard_inst_v1.5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [eeve_leaderboard_inst_v1.5.Q2_K.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q2_K.gguf) | Q2_K | 3.77GB |
| [eeve_leaderboard_inst_v1.5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q3_K_S.gguf) | Q3_K_S | 4.39GB |
| [eeve_leaderboard_inst_v1.5.Q3_K.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q3_K.gguf) | Q3_K | 4.88GB |
| [eeve_leaderboard_inst_v1.5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q3_K_M.gguf) | Q3_K_M | 4.88GB |
| [eeve_leaderboard_inst_v1.5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q3_K_L.gguf) | Q3_K_L | 5.31GB |
| [eeve_leaderboard_inst_v1.5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.IQ4_XS.gguf) | IQ4_XS | 5.47GB |
| [eeve_leaderboard_inst_v1.5.Q4_0.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q4_0.gguf) | Q4_0 | 5.7GB |
| [eeve_leaderboard_inst_v1.5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.IQ4_NL.gguf) | IQ4_NL | 5.77GB |
| [eeve_leaderboard_inst_v1.5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q4_K_S.gguf) | Q4_K_S | 5.75GB |
| [eeve_leaderboard_inst_v1.5.Q4_K.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q4_K.gguf) | Q4_K | 6.07GB |
| [eeve_leaderboard_inst_v1.5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q4_K_M.gguf) | Q4_K_M | 6.07GB |
| [eeve_leaderboard_inst_v1.5.Q4_1.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q4_1.gguf) | Q4_1 | 6.32GB |
| [eeve_leaderboard_inst_v1.5.Q5_0.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q5_0.gguf) | Q5_0 | 6.94GB |
| [eeve_leaderboard_inst_v1.5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q5_K_S.gguf) | Q5_K_S | 6.94GB |
| [eeve_leaderboard_inst_v1.5.Q5_K.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q5_K.gguf) | Q5_K | 7.13GB |
| [eeve_leaderboard_inst_v1.5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q5_K_M.gguf) | Q5_K_M | 7.13GB |
| [eeve_leaderboard_inst_v1.5.Q5_1.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q5_1.gguf) | Q5_1 | 7.56GB |
| [eeve_leaderboard_inst_v1.5.Q6_K.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q6_K.gguf) | Q6_K | 8.26GB |
| [eeve_leaderboard_inst_v1.5.Q8_0.gguf](https://huggingface.co/RichardErkhov/ENERGY-DRINK-LOVE_-_eeve_leaderboard_inst_v1.5-gguf/blob/main/eeve_leaderboard_inst_v1.5.Q8_0.gguf) | Q8_0 | 10.69GB |
Original model description:
---
license: apache-2.0
base_model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: leaderboard_inst_v1.5_dedup-eeve_EEVE-Korean-Instruct-10.8B-v1.0_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# leaderboard_inst_v1.5_dedup-eeve_EEVE-Korean-Instruct-10.8B-v1.0_SFT
This model is a fine-tuned version of [yanolja/EEVE-Korean-Instruct-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0) on the generator dataset.
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu118
- Datasets 2.17.1
- Tokenizers 0.15.2
|
tabh/whisper-small-en-VB
|
tabh
| 2024-11-05T22:30:47Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-11-05T20:09:06Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-en-VB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-en-VB
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6560
- Wer Ortho: 11.0424
- Wer: 7.8659
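As a usage sketch (an assumption, not part of the original card), the checkpoint can be run through the `transformers` ASR pipeline; the audio file name is a placeholder:
```python
# Minimal sketch: English transcription with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="tabh/whisper-small-en-VB")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```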
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|
| 0.0373 | 3.9683 | 250 | 0.5479 | 11.1283 | 8.2365 |
| 0.001 | 7.9365 | 500 | 0.6287 | 11.0939 | 7.6638 |
| 0.0003 | 11.9048 | 750 | 0.6504 | 11.0424 | 7.8659 |
| 0.0003 | 15.8730 | 1000 | 0.6560 | 11.0424 | 7.8659 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.0
|
JuniperChinenye/a4
|
JuniperChinenye
| 2024-11-05T22:30:41Z | 38 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T22:26:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mnneely/poca-SoccerTwos
|
mnneely
| 2024-11-05T22:26:21Z | 25 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-11-05T22:22:54Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mnneely/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
mav23/Qwen2.5-Coder-7B-GGUF
|
mav23
| 2024-11-05T22:21:09Z | 93 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"text-generation",
"en",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-7B",
"base_model:quantized:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-05T21:18:30Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-7B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
# Qwen2.5-Coder-7B
## Introduction
Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). For Qwen2.5-Coder, we release base and instruction-tuned language models at three sizes: 1.5, 7, and 32 (coming soon) billion parameters. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning**, and **code fixing**. Building on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, and more.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
**This repo contains the 7B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill in the middle tasks on this model.
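For instance, here is a minimal sketch of plain code completion with the base model (the prompt and generation settings are illustrative assumptions, not official examples):
```python
# Minimal sketch: code completion with the base (non-instruct) model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Coder-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "def quicksort(arr):\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```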
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code for Qwen2.5-Coder has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
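A minimal offline-inference sketch with vLLM (the prompt and sampling values are illustrative; add the `rope_scaling` entry to `config.json` first if you need YaRN for long inputs):
```python
# Minimal sketch: offline inference with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-Coder-7B")
params = SamplingParams(temperature=0.2, max_tokens=128)
outputs = llm.generate(["def fibonacci(n):\n"], params)
print(outputs[0].outputs[0].text)
```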
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{hui2024qwen2,
title={Qwen2. 5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf
|
RichardErkhov
| 2024-11-05T22:08:16Z | 10 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-05T20:07:24Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
openchat3.5_korean_v1.0_sft - GGUF
- Model creator: https://huggingface.co/SEOKDONG/
- Original model: https://huggingface.co/SEOKDONG/openchat3.5_korean_v1.0_sft/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [openchat3.5_korean_v1.0_sft.Q2_K.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q2_K.gguf) | Q2_K | 2.53GB |
| [openchat3.5_korean_v1.0_sft.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [openchat3.5_korean_v1.0_sft.Q3_K.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q3_K.gguf) | Q3_K | 3.28GB |
| [openchat3.5_korean_v1.0_sft.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [openchat3.5_korean_v1.0_sft.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [openchat3.5_korean_v1.0_sft.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [openchat3.5_korean_v1.0_sft.Q4_0.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q4_0.gguf) | Q4_0 | 3.83GB |
| [openchat3.5_korean_v1.0_sft.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [openchat3.5_korean_v1.0_sft.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [openchat3.5_korean_v1.0_sft.Q4_K.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q4_K.gguf) | Q4_K | 4.07GB |
| [openchat3.5_korean_v1.0_sft.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [openchat3.5_korean_v1.0_sft.Q4_1.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q4_1.gguf) | Q4_1 | 4.24GB |
| [openchat3.5_korean_v1.0_sft.Q5_0.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q5_0.gguf) | Q5_0 | 4.65GB |
| [openchat3.5_korean_v1.0_sft.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [openchat3.5_korean_v1.0_sft.Q5_K.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q5_K.gguf) | Q5_K | 4.78GB |
| [openchat3.5_korean_v1.0_sft.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [openchat3.5_korean_v1.0_sft.Q5_1.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q5_1.gguf) | Q5_1 | 5.07GB |
| [openchat3.5_korean_v1.0_sft.Q6_K.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q6_K.gguf) | Q6_K | 5.53GB |
| [openchat3.5_korean_v1.0_sft.Q8_0.gguf](https://huggingface.co/RichardErkhov/SEOKDONG_-_openchat3.5_korean_v1.0_sft-gguf/blob/main/openchat3.5_korean_v1.0_sft.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
language:
- ko
- en
base_model:
- openchat/openchat_3.5
pipeline_tag: text-generation
datasets:
- AIDX-ktds/ko_leaderboard
---
### ⛱ This model was developed so that it can be applied to the Korean language and Korea's
### diverse culture, using openchat3.5 as its foundation model;
### it draws on self-built Korean data covering 53 domains to understand
### Korean social values and culture. ✌
# ❶ Model Description
- Model name and key features:
This model is a Mistral 7B / openchat3.5-based model fine-tuned with SFT on top of the OpenChat 3.5 model.
It is designed to understand Korean and Korea's diverse cultural contexts ✨✨ and reflects Korean social values and
culture by drawing on self-built Korean data covering 53 domains.
Its key capabilities include text generation, conversational inference, document summarization, question answering, sentiment analysis, and a variety of other NLP tasks,
and it can be applied across fields such as law, finance, science, education, business, and cultural research.
- Model architecture: This model is a high-performance language model with 7 billion (7B) parameters, built on the Mistral 7B model.
With OpenChat 3.5 as its foundation model, it was trained via SFT (supervised fine-tuning) to excel at the Korean language and Korean culture.
Mistral 7B's lightweight architecture ensures fast inference and memory efficiency and is optimized for a variety of natural language processing tasks.
This architecture shows excellent performance on tasks such as text generation, question answering, document summarization, and sentiment analysis.
# ❷ Training Data
- This model was trained on a self-built dataset totaling 3.6GB. It comprises 2.33 million entries of Q&A, summarization, classification, and other data,
of which 1.33 million are multiple-choice questions spanning 53 domains, including Korean history, social studies, finance, law, tax, mathematics, biology, physics, and chemistry,
trained with the Chain of Thought method. A further 1.3 million short-answer questions cover 38 domains including Korean history, finance, law, tax, and mathematics.
The training data includes material that teaches the model to understand Korean social values and human emotions and to produce outputs that follow the given instructions.
- Training Instruction Datasets Format:
<pre><code>{"prompt": "prompt text", "completion": "ideal generated text"}</code></pre>
# ❸ Use Cases
This model can be used in a wide range of applications. For example:
- Education: question answering and explanation generation for study materials in history, mathematics, science, and more.
- Business: answering legal, financial, and tax-related queries and summarizing documents.
- Research and culture: natural language processing tailored to Korean society and culture, sentiment analysis, document generation, and translation.
- Customer service: generating conversations with users and providing personalized responses.
- The model offers high utility for understanding Korean society and for a wide variety of NLP tasks.
- Examples
1. Which country's territory is Dokdo?
Dokdo is the territory of the Republic of Korea; Dokdo is also known in Korea by the name Dokdo, and Dokdo is the name of the island located at Dokdo. Dokdo is the name of the island located at Dokdo.
2. What is the sea east of Korea called?
The sea east of Korea is called the 'East Sea'. The East Sea is located to the east of Korea and
borders North Korea. The East Sea is one of Korea's important maritime routes and plays an important role in shipping and fishing.
3. What do you think about Japan's colonization of Joseon in the early 20th century?
Joseon came under Japanese colonization in the mid-19th century. This happened because of Japan's military and economic strength and its political incursion.
Joseon surrendered to Japan's military invasion, and the colonial relationship began. Under these circumstances, the people of Joseon would have felt great anxiety and frustration.
In the late colonial period, however, as Japan's colonial system gradually weakened, the people of Joseon came to pursue greater freedom and independence.
Against this historical background, the people of Joseon would have held a very negative attitude toward colonization.
4. What do you think of the incident in which the patriot An Jung-geun shot Ito Hirobumi?
An Jung-geun shot Ito Hirobumi on April 27, 1909. He opposed Japanese imperialist politics and military activity, and he attacked Ito to block Japan's rule by force.
An Jung-geun is known within Korea as an independence activist, and his act is regarded as one of the important events of the Korean independence movement.
An Jung-geun was recognized in 1946 as a foremost independence activist, and his deeds inspire many independence activists.
5. How important do you consider the sense of community and the value of cooperation in Korean society?
These are very important. Korea traditionally has a strong sense of community and a deep-rooted culture of valuing cooperation within families and local communities.
These values still play an important role in today's society and are particularly helpful with social issues such as care for the elderly.
They also promote individual happiness and stability. Maintaining and developing these values is therefore an important goal for Korean society.
# ❹ Limitations ⛈⛈
- This model is specialized for the Korean language and Korean culture;
however, due to a lack of data in certain areas (e.g., the latest international materials, specialized fields),
the accuracy of its responses about other languages or cultures may be lower.
It may also show limited reasoning ability on problems that require complex logical thinking,
and if biased data is included in training, biased responses may be generated.
# ❺ How to Use
<pre><code>
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer  # AutoModel has no generate(); use the causal-LM class

tokenizer = AutoTokenizer.from_pretrained("SEOKDONG/openchat3.5_korean_v1.0_sft")
model = AutoModelForCausalLM.from_pretrained("SEOKDONG/openchat3.5_korean_v1.0_sft")

# Prompt (Korean): "Please assess with reference to Article 44 of the National Health Insurance Act,
# Article 19 of its Enforcement Decree, Article 5 of the Act on the Regulation of Terms and Conditions,
# and Article 54 of the Commercial Act" + " Answer:"
input_text = """ 「국민건강보험법」제44조, 「국민건강보험법 시행령」제19조,「약관의 규제에 관한 법률」제5조, 「상법」제54조 참조 판단 해줘""" + " 답변:"
inputs = tokenizer(input_text, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_length=1024, temperature=0.5, do_sample=True, repetition_penalty=1.15)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
</code></pre>
---
Here is a condensed English version of the text above:
# ❶ Model Description
**Model Name and Key Features**:
This Model is based on the OpenChat 3.5 model, fine-tuned using the SFT method on the Mistral 7B model.
It is designed to understand Korean and various cultural contexts, utilizing self-built data from 53 domains in Korean society.
The model supports tasks such as text generation, conversation inference, document summarization,
question answering, sentiment analysis, and other NLP tasks.
Its applications span fields like law, finance, science, education, business, and cultural research.
**Model Architecture**:
This Model is a high-performance language model with 7 billion parameters based on the Mistral 7B model.
It uses OpenChat 3.5 as the foundation and is fine-tuned using SFT to excel in Korean language and culture.
The streamlined Mistral 7B architecture ensures fast inference and memory efficiency,
optimized for various NLP tasks like text generation, question answering, document summarization, and sentiment analysis.
---
# ❷ Training Data
This Model was trained on 3.6GB of data, comprising 2.33 million Q&A instances.
This includes 1.33 million multiple-choice questions across 53 domains such as history,
finance, law, tax, and science, trained with the Chain of Thought method. Additionally,
1.3 million short-answer questions cover 38 domains including history, finance, and law.
**Training Instruction Dataset Format**:
`{"prompt": "prompt text", "completion": "ideal generated text"}`
---
# ❸ Use Cases
This Model can be used across multiple fields, such as:
- **Education**: Answering questions and generating explanations for subjects like history, math, and science.
- **Business**: Providing responses and summaries for legal, financial, and tax-related queries.
- **Research and Culture**: Performing NLP tasks, sentiment analysis, document generation, and translation.
- **Customer Service**: Generating conversations and personalized responses for users.
This model is highly versatile in various NLP tasks.
---
# ❹ Limitations
This Model is specialized in Korean language and culture.
However, it may lack accuracy in responding to topics outside its scope,
such as international or specialized data.
Additionally, it may have limited reasoning ability for complex logical problems and
may produce biased responses if trained on biased data.
|
sdyy/Nemotron-70B-Instruct-HF-Q8_8parts
|
sdyy
| 2024-11-05T22:08:07Z | 9 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-05T19:44:40Z |
---
license: apache-2.0
---
Q8_0 quant taken from
https://huggingface.co/bartowski/Llama-3.1-Nemotron-70B-Instruct-HF-GGUF/tree/main
and re-split into parts of at most 10G. The commands were run in Google Colab (hence the `!` prefixes); `<URL>` stands for the download link of each original part:
```
!apt-get install aria2
!aria2c -x 16 -s 16 <URL>
!./llama-gguf-split --merge Llama-3.1-Nemotron-70B-Instruct-HF-Q8_0-00001-of-00002.gguf Nemotron-70B-Instruct-HF-Q8_0.gguf
!/content/llama.cpp/llama-gguf-split --split-max-size 10G /content/llama.cpp/Nemotron-70B-Instruct-HF-Q8_0.gguf /content/Nemotron-70B-Instruct-HF-Q8
```
The parts were then uploaded with `huggingface_hub`:
```python
from huggingface_hub import upload_folder

# Path of the folder to upload
folder_path = "/content/split_model"  # replace with the correct path

# Repository name
repo_id = "sdyy/Nemotron-70B-Instruct-HF-Q8_8parts"

# Folder name inside the repository (optional)
repo_folder_name = "split_model"  # replace with the name you want

# Your Hugging Face token
token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# Upload the folder
upload_folder(
    folder_path=folder_path,
    repo_id=repo_id,
    repo_type="model",
    token=token,
)
```
|
xxhe/sft-mistral-7b-instruct-iter-1
|
xxhe
| 2024-11-05T22:07:14Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T22:04:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
qfq/llama_8B_metamath_4o_sft
|
qfq
| 2024-11-05T22:02:25Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T21:50:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/nontoxic-bagel-34b-v0.2-GGUF
|
mradermacher
| 2024-11-05T21:53:09Z | 44 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:ai2_arc",
"dataset:unalignment/spicy-3.1",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:boolq",
"dataset:jondurbin/cinematika-v0.1",
"dataset:drop",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:cais/mmlu",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:spider",
"dataset:squad_v2",
"dataset:migtissera/Synthia-v1.3",
"dataset:datasets/winogrande",
"dataset:nvidia/HelpSteer",
"dataset:Intel/orca_dpo_pairs",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned",
"dataset:LDJnr/Capybara",
"dataset:JULIELab/EmoBank",
"dataset:kingbri/PIPPA-shareGPT",
"base_model:jondurbin/nontoxic-bagel-34b-v0.2",
"base_model:quantized:jondurbin/nontoxic-bagel-34b-v0.2",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-05T00:26:15Z |
---
base_model: jondurbin/nontoxic-bagel-34b-v0.2
datasets:
- ai2_arc
- unalignment/spicy-3.1
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- datasets/winogrande
- nvidia/HelpSteer
- Intel/orca_dpo_pairs
- unalignment/toxic-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- allenai/ultrafeedback_binarized_cleaned
- Squish42/bluemoon-fandom-1-1-rp-cleaned
- LDJnr/Capybara
- JULIELab/EmoBank
- kingbri/PIPPA-shareGPT
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/nontoxic-bagel-34b-v0.2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
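For a quick programmatic check, here is a minimal sketch using `llama-cpp-python` with the Q4_K_M file from the table below; the context size and prompt are illustrative assumptions:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from this repo (Q4_K_M is ~20.8 GB, per the table below).
path = hf_hub_download(
    repo_id="mradermacher/nontoxic-bagel-34b-v0.2-GGUF",
    filename="nontoxic-bagel-34b-v0.2.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Write a haiku about bagels.", max_tokens=64)["choices"][0]["text"])
```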
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/nontoxic-bagel-34b-v0.2-GGUF/resolve/main/nontoxic-bagel-34b-v0.2.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/nontoxic-bagel-34b-v0.2-GGUF/resolve/main/nontoxic-bagel-34b-v0.2.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/nontoxic-bagel-34b-v0.2-GGUF/resolve/main/nontoxic-bagel-34b-v0.2.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/nontoxic-bagel-34b-v0.2-GGUF/resolve/main/nontoxic-bagel-34b-v0.2.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/nontoxic-bagel-34b-v0.2-GGUF/resolve/main/nontoxic-bagel-34b-v0.2.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/nontoxic-bagel-34b-v0.2-GGUF/resolve/main/nontoxic-bagel-34b-v0.2.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/nontoxic-bagel-34b-v0.2-GGUF/resolve/main/nontoxic-bagel-34b-v0.2.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/nontoxic-bagel-34b-v0.2-GGUF/resolve/main/nontoxic-bagel-34b-v0.2.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/nontoxic-bagel-34b-v0.2-GGUF/resolve/main/nontoxic-bagel-34b-v0.2.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/nontoxic-bagel-34b-v0.2-GGUF/resolve/main/nontoxic-bagel-34b-v0.2.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/nontoxic-bagel-34b-v0.2-GGUF/resolve/main/nontoxic-bagel-34b-v0.2.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
questions you might have and/or to request quantization of another model.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
PaceAhh/reverse-llama2-7b
|
PaceAhh
| 2024-11-05T21:49:54Z | 45 | 1 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-03T20:49:17Z |
---
base_model: meta-llama/Llama-2-7b-hf
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
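Given the `base_model` in the metadata, a hedged sketch for attaching this LoRA adapter with 🤗 PEFT (the Llama-2 base is gated, so an accepted license and Hub authentication are assumed):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"      # gated: requires accepted license + HF login
adapter_id = "PaceAhh/reverse-llama2-7b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # loads and attaches the adapter

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```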
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF
|
mradermacher
| 2024-11-05T21:49:11Z | 114 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:TeeZee/llama-2-7B-pirate-speech-600s",
"base_model:quantized:TeeZee/llama-2-7B-pirate-speech-600s",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-05T19:11:35Z |
---
base_model: TeeZee/llama-2-7B-pirate-speech-600s
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TeeZee/llama-2-7B-pirate-speech-600s
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
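The quants in this table are single files, but for repos that do ship multi-part GGUFs, rejoining them is a plain byte-wise concatenation. A minimal sketch — the `.part*` suffix is an assumed naming scheme, so check the repo's file list or the README linked above for the actual one:

```python
import glob
import shutil

# Assumed naming: <file>.gguf.part1of2, .part2of2, ... — verify against the repo.
parts = sorted(glob.glob("llama-2-7B-pirate-speech-600s.i1-Q4_K_M.gguf.part*"))
with open("llama-2-7B-pirate-speech-600s.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # append each part byte-for-byte
```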
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 3.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 3.9 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 3.9 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7B-pirate-speech-600s-i1-GGUF/resolve/main/llama-2-7B-pirate-speech-600s.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
questions you might have and/or to request quantization of another model.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
JuniperChinenye/a2
|
JuniperChinenye
| 2024-11-05T21:47:18Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T21:44:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
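A minimal, hedged sketch using the high-level `pipeline` API; the prompt and token budget are illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="JuniperChinenye/a2", device_map="auto")
print(generator("Hello, world!", max_new_tokens=32)[0]["generated_text"])
```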
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MayBashendy/ASAP_FineTuningBERT_Aug_k5_task1_organization_fold2
|
MayBashendy
| 2024-11-05T21:39:30Z | 164 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T20:33:37Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_Aug_k5_task1_organization_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_Aug_k5_task1_organization_fold2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5615
- Qwk: 0.6768
- Mse: 0.5615
- Rmse: 0.7493
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
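For readers reproducing this setup, the hyperparameters above map onto 🤗 `TrainingArguments` roughly as follows — a minimal sketch that omits dataset loading, the model head, and the Qwk/MSE/RMSE metric wiring:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ASAP_FineTuningBERT_Aug_k5_task1_organization_fold2",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",  # the Adam betas/epsilon above are the optimizer defaults
    num_train_epochs=10,
)
```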
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 0.0187 | 2 | 11.8282 | 0.0 | 11.8282 | 3.4392 |
| No log | 0.0374 | 4 | 9.4456 | 0.0 | 9.4456 | 3.0734 |
| No log | 0.0561 | 6 | 6.9935 | -0.0002 | 6.9935 | 2.6445 |
| No log | 0.0748 | 8 | 5.3783 | 0.0 | 5.3783 | 2.3191 |
| No log | 0.0935 | 10 | 4.2730 | 0.0 | 4.2730 | 2.0671 |
| No log | 0.1121 | 12 | 3.2713 | 0.0113 | 3.2713 | 1.8087 |
| No log | 0.1308 | 14 | 2.4478 | 0.0 | 2.4478 | 1.5645 |
| No log | 0.1495 | 16 | 1.7946 | 0.0 | 1.7946 | 1.3396 |
| No log | 0.1682 | 18 | 1.3323 | 0.1664 | 1.3323 | 1.1543 |
| No log | 0.1869 | 20 | 1.0813 | 0.0422 | 1.0813 | 1.0398 |
| No log | 0.2056 | 22 | 0.8900 | 0.0107 | 0.8900 | 0.9434 |
| No log | 0.2243 | 24 | 0.8225 | 0.0 | 0.8225 | 0.9069 |
| No log | 0.2430 | 26 | 0.8375 | 0.0 | 0.8375 | 0.9151 |
| No log | 0.2617 | 28 | 0.9404 | 0.0 | 0.9404 | 0.9698 |
| No log | 0.2804 | 30 | 0.8506 | 0.0099 | 0.8506 | 0.9223 |
| No log | 0.2991 | 32 | 0.8038 | 0.0 | 0.8038 | 0.8966 |
| No log | 0.3178 | 34 | 0.8263 | 0.0 | 0.8263 | 0.9090 |
| No log | 0.3364 | 36 | 0.7980 | 0.0 | 0.7980 | 0.8933 |
| No log | 0.3551 | 38 | 0.7651 | 0.0 | 0.7651 | 0.8747 |
| No log | 0.3738 | 40 | 0.8168 | 0.3585 | 0.8168 | 0.9038 |
| No log | 0.3925 | 42 | 0.7956 | 0.0120 | 0.7956 | 0.8920 |
| No log | 0.4112 | 44 | 0.8541 | 0.0 | 0.8541 | 0.9242 |
| No log | 0.4299 | 46 | 0.7765 | 0.0 | 0.7765 | 0.8812 |
| No log | 0.4486 | 48 | 0.7650 | 0.0 | 0.7650 | 0.8747 |
| No log | 0.4673 | 50 | 0.7647 | 0.0 | 0.7647 | 0.8745 |
| No log | 0.4860 | 52 | 0.7859 | 0.0068 | 0.7859 | 0.8865 |
| No log | 0.5047 | 54 | 0.7406 | 0.0068 | 0.7406 | 0.8606 |
| No log | 0.5234 | 56 | 0.6866 | 0.0068 | 0.6866 | 0.8286 |
| No log | 0.5421 | 58 | 0.6743 | 0.0255 | 0.6743 | 0.8212 |
| No log | 0.5607 | 60 | 0.6636 | 0.0575 | 0.6636 | 0.8146 |
| No log | 0.5794 | 62 | 0.7563 | 0.1790 | 0.7563 | 0.8696 |
| No log | 0.5981 | 64 | 0.6120 | 0.1478 | 0.6120 | 0.7823 |
| No log | 0.6168 | 66 | 0.5702 | 0.3224 | 0.5702 | 0.7551 |
| No log | 0.6355 | 68 | 0.5272 | 0.4746 | 0.5272 | 0.7261 |
| No log | 0.6542 | 70 | 0.5080 | 0.4359 | 0.5080 | 0.7127 |
| No log | 0.6729 | 72 | 0.5930 | 0.3251 | 0.5930 | 0.7701 |
| No log | 0.6916 | 74 | 0.5396 | 0.3488 | 0.5396 | 0.7346 |
| No log | 0.7103 | 76 | 0.5478 | 0.4483 | 0.5478 | 0.7402 |
| No log | 0.7290 | 78 | 0.5984 | 0.4070 | 0.5984 | 0.7736 |
| No log | 0.7477 | 80 | 0.6091 | 0.3289 | 0.6091 | 0.7805 |
| No log | 0.7664 | 82 | 0.6723 | 0.3855 | 0.6723 | 0.8199 |
| No log | 0.7850 | 84 | 0.6171 | 0.4258 | 0.6171 | 0.7856 |
| No log | 0.8037 | 86 | 0.5413 | 0.4567 | 0.5413 | 0.7357 |
| No log | 0.8224 | 88 | 0.4990 | 0.4403 | 0.4990 | 0.7064 |
| No log | 0.8411 | 90 | 0.5166 | 0.5312 | 0.5166 | 0.7187 |
| No log | 0.8598 | 92 | 0.5014 | 0.5320 | 0.5014 | 0.7081 |
| No log | 0.8785 | 94 | 0.5110 | 0.3662 | 0.5110 | 0.7149 |
| No log | 0.8972 | 96 | 0.5113 | 0.3701 | 0.5113 | 0.7150 |
| No log | 0.9159 | 98 | 0.5044 | 0.5339 | 0.5044 | 0.7102 |
| No log | 0.9346 | 100 | 0.6252 | 0.5205 | 0.6252 | 0.7907 |
| No log | 0.9533 | 102 | 0.6186 | 0.3469 | 0.6186 | 0.7865 |
| No log | 0.9720 | 104 | 0.5491 | 0.5120 | 0.5491 | 0.7410 |
| No log | 0.9907 | 106 | 0.4532 | 0.5199 | 0.4532 | 0.6732 |
| No log | 1.0093 | 108 | 0.6157 | 0.3541 | 0.6157 | 0.7846 |
| No log | 1.0280 | 110 | 0.5994 | 0.3841 | 0.5994 | 0.7742 |
| No log | 1.0467 | 112 | 0.4688 | 0.4896 | 0.4688 | 0.6847 |
| No log | 1.0654 | 114 | 0.4748 | 0.5150 | 0.4748 | 0.6891 |
| No log | 1.0841 | 116 | 0.4681 | 0.5038 | 0.4681 | 0.6842 |
| No log | 1.1028 | 118 | 0.4701 | 0.5171 | 0.4701 | 0.6856 |
| No log | 1.1215 | 120 | 0.4750 | 0.5067 | 0.4750 | 0.6892 |
| No log | 1.1402 | 122 | 0.4992 | 0.5073 | 0.4992 | 0.7065 |
| No log | 1.1589 | 124 | 0.6159 | 0.5130 | 0.6159 | 0.7848 |
| No log | 1.1776 | 126 | 0.5906 | 0.5213 | 0.5906 | 0.7685 |
| No log | 1.1963 | 128 | 0.4837 | 0.5154 | 0.4837 | 0.6955 |
| No log | 1.2150 | 130 | 0.4581 | 0.5140 | 0.4581 | 0.6768 |
| No log | 1.2336 | 132 | 0.4546 | 0.5349 | 0.4546 | 0.6743 |
| No log | 1.2523 | 134 | 0.4904 | 0.5547 | 0.4904 | 0.7003 |
| No log | 1.2710 | 136 | 0.4322 | 0.5673 | 0.4322 | 0.6574 |
| No log | 1.2897 | 138 | 0.4199 | 0.4941 | 0.4199 | 0.6480 |
| No log | 1.3084 | 140 | 0.4273 | 0.5635 | 0.4273 | 0.6537 |
| No log | 1.3271 | 142 | 0.5680 | 0.5737 | 0.5680 | 0.7537 |
| No log | 1.3458 | 144 | 0.6503 | 0.5532 | 0.6503 | 0.8064 |
| No log | 1.3645 | 146 | 0.5092 | 0.5621 | 0.5092 | 0.7136 |
| No log | 1.3832 | 148 | 0.4435 | 0.5491 | 0.4435 | 0.6660 |
| No log | 1.4019 | 150 | 0.4246 | 0.5599 | 0.4246 | 0.6516 |
| No log | 1.4206 | 152 | 0.4222 | 0.5670 | 0.4222 | 0.6498 |
| No log | 1.4393 | 154 | 0.3999 | 0.5621 | 0.3999 | 0.6324 |
| No log | 1.4579 | 156 | 0.4006 | 0.5667 | 0.4006 | 0.6329 |
| No log | 1.4766 | 158 | 0.4815 | 0.5949 | 0.4815 | 0.6939 |
| No log | 1.4953 | 160 | 0.5115 | 0.5992 | 0.5115 | 0.7152 |
| No log | 1.5140 | 162 | 0.4448 | 0.5771 | 0.4448 | 0.6669 |
| No log | 1.5327 | 164 | 0.4060 | 0.5354 | 0.4060 | 0.6372 |
| No log | 1.5514 | 166 | 0.3954 | 0.5568 | 0.3954 | 0.6288 |
| No log | 1.5701 | 168 | 0.4976 | 0.6128 | 0.4976 | 0.7054 |
| No log | 1.5888 | 170 | 0.5744 | 0.6035 | 0.5744 | 0.7579 |
| No log | 1.6075 | 172 | 0.4905 | 0.6212 | 0.4905 | 0.7004 |
| No log | 1.6262 | 174 | 0.4012 | 0.5887 | 0.4012 | 0.6334 |
| No log | 1.6449 | 176 | 0.3918 | 0.5555 | 0.3918 | 0.6259 |
| No log | 1.6636 | 178 | 0.4818 | 0.6015 | 0.4818 | 0.6941 |
| No log | 1.6822 | 180 | 0.6624 | 0.5701 | 0.6624 | 0.8139 |
| No log | 1.7009 | 182 | 0.5784 | 0.5985 | 0.5784 | 0.7605 |
| No log | 1.7196 | 184 | 0.4582 | 0.5997 | 0.4582 | 0.6769 |
| No log | 1.7383 | 186 | 0.4274 | 0.5966 | 0.4274 | 0.6537 |
| No log | 1.7570 | 188 | 0.3929 | 0.5562 | 0.3929 | 0.6268 |
| No log | 1.7757 | 190 | 0.4036 | 0.5822 | 0.4036 | 0.6353 |
| No log | 1.7944 | 192 | 0.3847 | 0.5641 | 0.3847 | 0.6202 |
| No log | 1.8131 | 194 | 0.4141 | 0.5957 | 0.4141 | 0.6435 |
| No log | 1.8318 | 196 | 0.5904 | 0.5804 | 0.5904 | 0.7684 |
| No log | 1.8505 | 198 | 0.5375 | 0.5777 | 0.5375 | 0.7332 |
| No log | 1.8692 | 200 | 0.4652 | 0.5913 | 0.4652 | 0.6820 |
| No log | 1.8879 | 202 | 0.4110 | 0.4917 | 0.4110 | 0.6411 |
| No log | 1.9065 | 204 | 0.4156 | 0.5518 | 0.4156 | 0.6447 |
| No log | 1.9252 | 206 | 0.6167 | 0.5575 | 0.6167 | 0.7853 |
| No log | 1.9439 | 208 | 0.7701 | 0.5331 | 0.7701 | 0.8776 |
| No log | 1.9626 | 210 | 0.6919 | 0.5655 | 0.6919 | 0.8318 |
| No log | 1.9813 | 212 | 0.4336 | 0.5914 | 0.4336 | 0.6585 |
| No log | 2.0 | 214 | 0.3868 | 0.5428 | 0.3868 | 0.6219 |
| No log | 2.0187 | 216 | 0.3858 | 0.5666 | 0.3858 | 0.6211 |
| No log | 2.0374 | 218 | 0.4099 | 0.5906 | 0.4099 | 0.6402 |
| No log | 2.0561 | 220 | 0.5031 | 0.6073 | 0.5031 | 0.7093 |
| No log | 2.0748 | 222 | 0.4798 | 0.6174 | 0.4798 | 0.6927 |
| No log | 2.0935 | 224 | 0.3997 | 0.5774 | 0.3997 | 0.6322 |
| No log | 2.1121 | 226 | 0.3939 | 0.5480 | 0.3939 | 0.6276 |
| No log | 2.1308 | 228 | 0.3859 | 0.5618 | 0.3859 | 0.6212 |
| No log | 2.1495 | 230 | 0.4653 | 0.6133 | 0.4653 | 0.6821 |
| No log | 2.1682 | 232 | 0.5357 | 0.6148 | 0.5357 | 0.7319 |
| No log | 2.1869 | 234 | 0.4221 | 0.6073 | 0.4221 | 0.6497 |
| No log | 2.2056 | 236 | 0.4012 | 0.5856 | 0.4012 | 0.6334 |
| No log | 2.2243 | 238 | 0.5025 | 0.6136 | 0.5025 | 0.7089 |
| No log | 2.2430 | 240 | 0.6980 | 0.6003 | 0.6980 | 0.8355 |
| No log | 2.2617 | 242 | 0.7463 | 0.5915 | 0.7463 | 0.8639 |
| No log | 2.2804 | 244 | 0.6247 | 0.5871 | 0.6247 | 0.7904 |
| No log | 2.2991 | 246 | 0.5720 | 0.5993 | 0.5720 | 0.7563 |
| No log | 2.3178 | 248 | 0.6426 | 0.5950 | 0.6426 | 0.8016 |
| No log | 2.3364 | 250 | 0.6043 | 0.5906 | 0.6043 | 0.7774 |
| No log | 2.3551 | 252 | 0.4626 | 0.5826 | 0.4626 | 0.6801 |
| No log | 2.3738 | 254 | 0.4439 | 0.5638 | 0.4439 | 0.6662 |
| No log | 2.3925 | 256 | 0.6188 | 0.5950 | 0.6188 | 0.7866 |
| No log | 2.4112 | 258 | 0.6048 | 0.6029 | 0.6048 | 0.7777 |
| No log | 2.4299 | 260 | 0.4317 | 0.5694 | 0.4317 | 0.6570 |
| No log | 2.4486 | 262 | 0.4032 | 0.5456 | 0.4032 | 0.6350 |
| No log | 2.4673 | 264 | 0.4424 | 0.5796 | 0.4424 | 0.6651 |
| No log | 2.4860 | 266 | 0.6932 | 0.5762 | 0.6932 | 0.8326 |
| No log | 2.5047 | 268 | 0.8656 | 0.5370 | 0.8656 | 0.9304 |
| No log | 2.5234 | 270 | 0.7188 | 0.5651 | 0.7188 | 0.8478 |
| No log | 2.5421 | 272 | 0.4590 | 0.5743 | 0.4590 | 0.6775 |
| No log | 2.5607 | 274 | 0.4221 | 0.5727 | 0.4221 | 0.6497 |
| No log | 2.5794 | 276 | 0.4294 | 0.5746 | 0.4294 | 0.6553 |
| No log | 2.5981 | 278 | 0.4537 | 0.5769 | 0.4537 | 0.6736 |
| No log | 2.6168 | 280 | 0.4178 | 0.5846 | 0.4178 | 0.6464 |
| No log | 2.6355 | 282 | 0.3991 | 0.5342 | 0.3991 | 0.6318 |
| No log | 2.6542 | 284 | 0.4323 | 0.4982 | 0.4323 | 0.6575 |
| No log | 2.6729 | 286 | 0.3870 | 0.5578 | 0.3870 | 0.6221 |
| No log | 2.6916 | 288 | 0.5014 | 0.6112 | 0.5014 | 0.7081 |
| No log | 2.7103 | 290 | 0.6295 | 0.6026 | 0.6295 | 0.7934 |
| No log | 2.7290 | 292 | 0.6487 | 0.6019 | 0.6487 | 0.8054 |
| No log | 2.7477 | 294 | 0.5443 | 0.6068 | 0.5443 | 0.7377 |
| No log | 2.7664 | 296 | 0.4149 | 0.6083 | 0.4149 | 0.6441 |
| No log | 2.7850 | 298 | 0.4033 | 0.5413 | 0.4033 | 0.6350 |
| No log | 2.8037 | 300 | 0.3925 | 0.5506 | 0.3925 | 0.6265 |
| No log | 2.8224 | 302 | 0.4838 | 0.6444 | 0.4838 | 0.6956 |
| No log | 2.8411 | 304 | 0.7536 | 0.6593 | 0.7536 | 0.8681 |
| No log | 2.8598 | 306 | 0.7801 | 0.6086 | 0.7801 | 0.8832 |
| No log | 2.8785 | 308 | 0.5594 | 0.5966 | 0.5594 | 0.7479 |
| No log | 2.8972 | 310 | 0.4200 | 0.5178 | 0.4200 | 0.6481 |
| No log | 2.9159 | 312 | 0.3945 | 0.5285 | 0.3945 | 0.6281 |
| No log | 2.9346 | 314 | 0.3842 | 0.5345 | 0.3842 | 0.6199 |
| No log | 2.9533 | 316 | 0.3799 | 0.5662 | 0.3799 | 0.6164 |
| No log | 2.9720 | 318 | 0.3983 | 0.6046 | 0.3983 | 0.6311 |
| No log | 2.9907 | 320 | 0.3966 | 0.5960 | 0.3966 | 0.6297 |
| No log | 3.0093 | 322 | 0.3865 | 0.5834 | 0.3865 | 0.6217 |
| No log | 3.0280 | 324 | 0.4193 | 0.6198 | 0.4193 | 0.6475 |
| No log | 3.0467 | 326 | 0.6205 | 0.6929 | 0.6205 | 0.7877 |
| No log | 3.0654 | 328 | 0.7082 | 0.7049 | 0.7082 | 0.8416 |
| No log | 3.0841 | 330 | 0.4979 | 0.6912 | 0.4979 | 0.7056 |
| No log | 3.1028 | 332 | 0.4659 | 0.6671 | 0.4659 | 0.6825 |
| No log | 3.1215 | 334 | 0.5928 | 0.6865 | 0.5928 | 0.7699 |
| No log | 3.1402 | 336 | 0.6530 | 0.6947 | 0.6530 | 0.8081 |
| No log | 3.1589 | 338 | 0.4987 | 0.6793 | 0.4987 | 0.7062 |
| No log | 3.1776 | 340 | 0.4530 | 0.6431 | 0.4530 | 0.6731 |
| No log | 3.1963 | 342 | 0.4860 | 0.6297 | 0.4860 | 0.6971 |
| No log | 3.2150 | 344 | 0.5868 | 0.6663 | 0.5868 | 0.7661 |
| No log | 3.2336 | 346 | 0.6991 | 0.6900 | 0.6991 | 0.8361 |
| No log | 3.2523 | 348 | 0.5081 | 0.6940 | 0.5081 | 0.7128 |
| No log | 3.2710 | 350 | 0.4231 | 0.6303 | 0.4231 | 0.6505 |
| No log | 3.2897 | 352 | 0.4100 | 0.6289 | 0.4100 | 0.6403 |
| No log | 3.3084 | 354 | 0.4129 | 0.6492 | 0.4129 | 0.6426 |
| No log | 3.3271 | 356 | 0.3901 | 0.6285 | 0.3901 | 0.6246 |
| No log | 3.3458 | 358 | 0.3819 | 0.5992 | 0.3819 | 0.6180 |
| No log | 3.3645 | 360 | 0.4602 | 0.4929 | 0.4602 | 0.6784 |
| No log | 3.3832 | 362 | 0.3975 | 0.5673 | 0.3975 | 0.6305 |
| No log | 3.4019 | 364 | 0.4384 | 0.6640 | 0.4384 | 0.6621 |
| No log | 3.4206 | 366 | 0.5032 | 0.6977 | 0.5032 | 0.7094 |
| No log | 3.4393 | 368 | 0.4586 | 0.6674 | 0.4586 | 0.6772 |
| No log | 3.4579 | 370 | 0.4652 | 0.6663 | 0.4652 | 0.6820 |
| No log | 3.4766 | 372 | 0.5169 | 0.6796 | 0.5169 | 0.7189 |
| No log | 3.4953 | 374 | 0.6688 | 0.6703 | 0.6688 | 0.8178 |
| No log | 3.5140 | 376 | 0.5395 | 0.6711 | 0.5395 | 0.7345 |
| No log | 3.5327 | 378 | 0.4407 | 0.6217 | 0.4407 | 0.6638 |
| No log | 3.5514 | 380 | 0.4947 | 0.6769 | 0.4947 | 0.7034 |
| No log | 3.5701 | 382 | 0.7478 | 0.6968 | 0.7478 | 0.8648 |
| No log | 3.5888 | 384 | 0.7129 | 0.6997 | 0.7129 | 0.8444 |
| No log | 3.6075 | 386 | 0.4234 | 0.6113 | 0.4234 | 0.6507 |
| No log | 3.6262 | 388 | 0.3962 | 0.5280 | 0.3962 | 0.6294 |
| No log | 3.6449 | 390 | 0.3911 | 0.5337 | 0.3911 | 0.6253 |
| No log | 3.6636 | 392 | 0.3962 | 0.6009 | 0.3962 | 0.6294 |
| No log | 3.6822 | 394 | 0.3994 | 0.6002 | 0.3994 | 0.6319 |
| No log | 3.7009 | 396 | 0.4194 | 0.6146 | 0.4194 | 0.6476 |
| No log | 3.7196 | 398 | 0.5585 | 0.6833 | 0.5585 | 0.7473 |
| No log | 3.7383 | 400 | 0.5971 | 0.6876 | 0.5971 | 0.7727 |
| No log | 3.7570 | 402 | 0.5005 | 0.6404 | 0.5005 | 0.7075 |
| No log | 3.7757 | 404 | 0.5542 | 0.6423 | 0.5542 | 0.7444 |
| No log | 3.7944 | 406 | 0.6900 | 0.6630 | 0.6900 | 0.8307 |
| No log | 3.8131 | 408 | 0.7025 | 0.7027 | 0.7025 | 0.8381 |
| No log | 3.8318 | 410 | 0.6722 | 0.7035 | 0.6722 | 0.8199 |
| No log | 3.8505 | 412 | 0.4954 | 0.7007 | 0.4954 | 0.7038 |
| No log | 3.8692 | 414 | 0.4148 | 0.5837 | 0.4148 | 0.6441 |
| No log | 3.8879 | 416 | 0.4001 | 0.5671 | 0.4001 | 0.6326 |
| No log | 3.9065 | 418 | 0.3942 | 0.5689 | 0.3942 | 0.6279 |
| No log | 3.9252 | 420 | 0.3949 | 0.5710 | 0.3949 | 0.6284 |
| No log | 3.9439 | 422 | 0.4779 | 0.6355 | 0.4779 | 0.6913 |
| No log | 3.9626 | 424 | 0.5996 | 0.6855 | 0.5996 | 0.7743 |
| No log | 3.9813 | 426 | 0.4802 | 0.6261 | 0.4802 | 0.6930 |
| No log | 4.0 | 428 | 0.4583 | 0.6067 | 0.4583 | 0.6770 |
| No log | 4.0187 | 430 | 0.5142 | 0.6388 | 0.5142 | 0.7171 |
| No log | 4.0374 | 432 | 0.7043 | 0.7023 | 0.7043 | 0.8392 |
| No log | 4.0561 | 434 | 0.6648 | 0.6954 | 0.6648 | 0.8154 |
| No log | 4.0748 | 436 | 0.5047 | 0.6585 | 0.5047 | 0.7105 |
| No log | 4.0935 | 438 | 0.5087 | 0.6805 | 0.5087 | 0.7132 |
| No log | 4.1121 | 440 | 0.5302 | 0.6977 | 0.5302 | 0.7281 |
| No log | 4.1308 | 442 | 0.4651 | 0.6660 | 0.4651 | 0.6820 |
| No log | 4.1495 | 444 | 0.4828 | 0.6818 | 0.4828 | 0.6948 |
| No log | 4.1682 | 446 | 0.4300 | 0.6221 | 0.4300 | 0.6557 |
| No log | 4.1869 | 448 | 0.4131 | 0.5618 | 0.4131 | 0.6427 |
| No log | 4.2056 | 450 | 0.4277 | 0.6237 | 0.4277 | 0.6540 |
| No log | 4.2243 | 452 | 0.4659 | 0.6218 | 0.4659 | 0.6825 |
| No log | 4.2430 | 454 | 0.4467 | 0.6175 | 0.4467 | 0.6684 |
| No log | 4.2617 | 456 | 0.4970 | 0.6411 | 0.4970 | 0.7050 |
| No log | 4.2804 | 458 | 0.5869 | 0.7160 | 0.5869 | 0.7661 |
| No log | 4.2991 | 460 | 0.6626 | 0.6976 | 0.6626 | 0.8140 |
| No log | 4.3178 | 462 | 0.6230 | 0.7055 | 0.6230 | 0.7893 |
| No log | 4.3364 | 464 | 0.4883 | 0.6534 | 0.4883 | 0.6988 |
| No log | 4.3551 | 466 | 0.4495 | 0.6291 | 0.4495 | 0.6704 |
| No log | 4.3738 | 468 | 0.4269 | 0.6077 | 0.4269 | 0.6534 |
| No log | 4.3925 | 470 | 0.5558 | 0.7130 | 0.5558 | 0.7455 |
| No log | 4.4112 | 472 | 0.6687 | 0.7026 | 0.6687 | 0.8177 |
| No log | 4.4299 | 474 | 0.5277 | 0.7158 | 0.5277 | 0.7264 |
| No log | 4.4486 | 476 | 0.4089 | 0.6049 | 0.4089 | 0.6395 |
| No log | 4.4673 | 478 | 0.4097 | 0.6136 | 0.4097 | 0.6401 |
| No log | 4.4860 | 480 | 0.5236 | 0.7139 | 0.5236 | 0.7236 |
| No log | 4.5047 | 482 | 0.5610 | 0.7205 | 0.5610 | 0.7490 |
| No log | 4.5234 | 484 | 0.4215 | 0.6375 | 0.4215 | 0.6493 |
| No log | 4.5421 | 486 | 0.4090 | 0.5428 | 0.4090 | 0.6396 |
| No log | 4.5607 | 488 | 0.4040 | 0.5511 | 0.4040 | 0.6356 |
| No log | 4.5794 | 490 | 0.4498 | 0.6651 | 0.4498 | 0.6707 |
| No log | 4.5981 | 492 | 0.7271 | 0.7002 | 0.7271 | 0.8527 |
| No log | 4.6168 | 494 | 0.7318 | 0.6926 | 0.7318 | 0.8554 |
| No log | 4.6355 | 496 | 0.5008 | 0.6780 | 0.5008 | 0.7077 |
| No log | 4.6542 | 498 | 0.4091 | 0.5723 | 0.4091 | 0.6396 |
| 0.5316 | 4.6729 | 500 | 0.4127 | 0.5834 | 0.4127 | 0.6424 |
| 0.5316 | 4.6916 | 502 | 0.5203 | 0.6901 | 0.5203 | 0.7213 |
| 0.5316 | 4.7103 | 504 | 0.5924 | 0.7037 | 0.5924 | 0.7697 |
| 0.5316 | 4.7290 | 506 | 0.4753 | 0.6824 | 0.4753 | 0.6894 |
| 0.5316 | 4.7477 | 508 | 0.4071 | 0.5981 | 0.4071 | 0.6381 |
| 0.5316 | 4.7664 | 510 | 0.4298 | 0.6318 | 0.4298 | 0.6556 |
| 0.5316 | 4.7850 | 512 | 0.6057 | 0.7067 | 0.6057 | 0.7783 |
| 0.5316 | 4.8037 | 514 | 0.6563 | 0.7150 | 0.6563 | 0.8101 |
| 0.5316 | 4.8224 | 516 | 0.5594 | 0.7003 | 0.5594 | 0.7480 |
| 0.5316 | 4.8411 | 518 | 0.4846 | 0.6876 | 0.4846 | 0.6961 |
| 0.5316 | 4.8598 | 520 | 0.4701 | 0.6665 | 0.4701 | 0.6856 |
| 0.5316 | 4.8785 | 522 | 0.5456 | 0.7071 | 0.5456 | 0.7386 |
| 0.5316 | 4.8972 | 524 | 0.6385 | 0.7149 | 0.6385 | 0.7991 |
| 0.5316 | 4.9159 | 526 | 0.5622 | 0.7077 | 0.5622 | 0.7498 |
| 0.5316 | 4.9346 | 528 | 0.4429 | 0.6457 | 0.4429 | 0.6655 |
| 0.5316 | 4.9533 | 530 | 0.4629 | 0.6600 | 0.4629 | 0.6804 |
| 0.5316 | 4.9720 | 532 | 0.5119 | 0.7073 | 0.5119 | 0.7154 |
| 0.5316 | 4.9907 | 534 | 0.4723 | 0.6642 | 0.4723 | 0.6872 |
| 0.5316 | 5.0093 | 536 | 0.4909 | 0.6908 | 0.4909 | 0.7007 |
| 0.5316 | 5.0280 | 538 | 0.5399 | 0.6994 | 0.5399 | 0.7348 |
| 0.5316 | 5.0467 | 540 | 0.6162 | 0.7094 | 0.6162 | 0.7850 |
| 0.5316 | 5.0654 | 542 | 0.5765 | 0.7158 | 0.5765 | 0.7593 |
| 0.5316 | 5.0841 | 544 | 0.5446 | 0.7047 | 0.5446 | 0.7380 |
| 0.5316 | 5.1028 | 546 | 0.4575 | 0.6554 | 0.4575 | 0.6764 |
| 0.5316 | 5.1215 | 548 | 0.4749 | 0.6575 | 0.4749 | 0.6892 |
| 0.5316 | 5.1402 | 550 | 0.5240 | 0.6913 | 0.5240 | 0.7239 |
| 0.5316 | 5.1589 | 552 | 0.6977 | 0.6910 | 0.6977 | 0.8353 |
| 0.5316 | 5.1776 | 554 | 0.6397 | 0.6923 | 0.6397 | 0.7998 |
| 0.5316 | 5.1963 | 556 | 0.4430 | 0.6131 | 0.4430 | 0.6656 |
| 0.5316 | 5.2150 | 558 | 0.4221 | 0.5456 | 0.4221 | 0.6497 |
| 0.5316 | 5.2336 | 560 | 0.4398 | 0.5996 | 0.4398 | 0.6632 |
| 0.5316 | 5.2523 | 562 | 0.6185 | 0.6933 | 0.6185 | 0.7864 |
| 0.5316 | 5.2710 | 564 | 0.6709 | 0.7066 | 0.6709 | 0.8191 |
| 0.5316 | 5.2897 | 566 | 0.5045 | 0.6715 | 0.5045 | 0.7103 |
| 0.5316 | 5.3084 | 568 | 0.4230 | 0.6029 | 0.4230 | 0.6504 |
| 0.5316 | 5.3271 | 570 | 0.4542 | 0.6298 | 0.4542 | 0.6740 |
| 0.5316 | 5.3458 | 572 | 0.5521 | 0.6968 | 0.5521 | 0.7430 |
| 0.5316 | 5.3645 | 574 | 0.5495 | 0.6913 | 0.5495 | 0.7413 |
| 0.5316 | 5.3832 | 576 | 0.5656 | 0.6955 | 0.5656 | 0.7520 |
| 0.5316 | 5.4019 | 578 | 0.5671 | 0.7015 | 0.5671 | 0.7531 |
| 0.5316 | 5.4206 | 580 | 0.6673 | 0.7178 | 0.6673 | 0.8169 |
| 0.5316 | 5.4393 | 582 | 0.5750 | 0.7001 | 0.5750 | 0.7583 |
| 0.5316 | 5.4579 | 584 | 0.5240 | 0.6672 | 0.5240 | 0.7239 |
| 0.5316 | 5.4766 | 586 | 0.5630 | 0.6882 | 0.5630 | 0.7503 |
| 0.5316 | 5.4953 | 588 | 0.6095 | 0.6869 | 0.6095 | 0.7807 |
| 0.5316 | 5.5140 | 590 | 0.6063 | 0.6871 | 0.6063 | 0.7787 |
| 0.5316 | 5.5327 | 592 | 0.5304 | 0.6594 | 0.5304 | 0.7283 |
| 0.5316 | 5.5514 | 594 | 0.4835 | 0.6115 | 0.4835 | 0.6953 |
| 0.5316 | 5.5701 | 596 | 0.5825 | 0.6852 | 0.5825 | 0.7632 |
| 0.5316 | 5.5888 | 598 | 0.6252 | 0.7005 | 0.6252 | 0.7907 |
| 0.5316 | 5.6075 | 600 | 0.5187 | 0.6680 | 0.5187 | 0.7202 |
| 0.5316 | 5.6262 | 602 | 0.4715 | 0.6254 | 0.4715 | 0.6866 |
| 0.5316 | 5.6449 | 604 | 0.4764 | 0.6405 | 0.4764 | 0.6902 |
| 0.5316 | 5.6636 | 606 | 0.6056 | 0.7117 | 0.6056 | 0.7782 |
| 0.5316 | 5.6822 | 608 | 0.6509 | 0.7056 | 0.6509 | 0.8068 |
| 0.5316 | 5.7009 | 610 | 0.5918 | 0.7113 | 0.5918 | 0.7693 |
| 0.5316 | 5.7196 | 612 | 0.4843 | 0.6692 | 0.4843 | 0.6959 |
| 0.5316 | 5.7383 | 614 | 0.4883 | 0.6720 | 0.4883 | 0.6988 |
| 0.5316 | 5.7570 | 616 | 0.5444 | 0.6973 | 0.5444 | 0.7378 |
| 0.5316 | 5.7757 | 618 | 0.5505 | 0.7044 | 0.5505 | 0.7419 |
| 0.5316 | 5.7944 | 620 | 0.4915 | 0.6739 | 0.4915 | 0.7011 |
| 0.5316 | 5.8131 | 622 | 0.4582 | 0.6373 | 0.4582 | 0.6769 |
| 0.5316 | 5.8318 | 624 | 0.4768 | 0.6649 | 0.4768 | 0.6905 |
| 0.5316 | 5.8505 | 626 | 0.6611 | 0.7028 | 0.6611 | 0.8131 |
| 0.5316 | 5.8692 | 628 | 0.7303 | 0.6964 | 0.7303 | 0.8546 |
| 0.5316 | 5.8879 | 630 | 0.5647 | 0.6906 | 0.5647 | 0.7515 |
| 0.5316 | 5.9065 | 632 | 0.4980 | 0.6684 | 0.4980 | 0.7057 |
| 0.5316 | 5.9252 | 634 | 0.5363 | 0.6907 | 0.5363 | 0.7323 |
| 0.5316 | 5.9439 | 636 | 0.6472 | 0.7001 | 0.6472 | 0.8045 |
| 0.5316 | 5.9626 | 638 | 0.7636 | 0.6939 | 0.7636 | 0.8738 |
| 0.5316 | 5.9813 | 640 | 0.7484 | 0.7007 | 0.7484 | 0.8651 |
| 0.5316 | 6.0 | 642 | 0.5880 | 0.7004 | 0.5880 | 0.7668 |
| 0.5316 | 6.0187 | 644 | 0.4689 | 0.6275 | 0.4689 | 0.6848 |
| 0.5316 | 6.0374 | 646 | 0.4607 | 0.6181 | 0.4607 | 0.6788 |
| 0.5316 | 6.0561 | 648 | 0.5677 | 0.6943 | 0.5677 | 0.7535 |
| 0.5316 | 6.0748 | 650 | 0.7263 | 0.6973 | 0.7263 | 0.8522 |
| 0.5316 | 6.0935 | 652 | 0.6477 | 0.6956 | 0.6477 | 0.8048 |
| 0.5316 | 6.1121 | 654 | 0.5297 | 0.6691 | 0.5297 | 0.7278 |
| 0.5316 | 6.1308 | 656 | 0.5139 | 0.6569 | 0.5139 | 0.7169 |
| 0.5316 | 6.1495 | 658 | 0.6268 | 0.6902 | 0.6268 | 0.7917 |
| 0.5316 | 6.1682 | 660 | 0.6343 | 0.6883 | 0.6343 | 0.7964 |
| 0.5316 | 6.1869 | 662 | 0.5053 | 0.6284 | 0.5053 | 0.7108 |
| 0.5316 | 6.2056 | 664 | 0.4878 | 0.6251 | 0.4878 | 0.6985 |
| 0.5316 | 6.2243 | 666 | 0.5540 | 0.6580 | 0.5540 | 0.7443 |
| 0.5316 | 6.2430 | 668 | 0.6927 | 0.6890 | 0.6927 | 0.8323 |
| 0.5316 | 6.2617 | 670 | 0.7222 | 0.6903 | 0.7222 | 0.8498 |
| 0.5316 | 6.2804 | 672 | 0.5504 | 0.6681 | 0.5504 | 0.7419 |
| 0.5316 | 6.2991 | 674 | 0.4987 | 0.6431 | 0.4987 | 0.7062 |
| 0.5316 | 6.3178 | 676 | 0.5321 | 0.6602 | 0.5321 | 0.7295 |
| 0.5316 | 6.3364 | 678 | 0.5474 | 0.6591 | 0.5474 | 0.7399 |
| 0.5316 | 6.3551 | 680 | 0.5298 | 0.6502 | 0.5298 | 0.7279 |
| 0.5316 | 6.3738 | 682 | 0.5394 | 0.6521 | 0.5394 | 0.7345 |
| 0.5316 | 6.3925 | 684 | 0.5734 | 0.6675 | 0.5734 | 0.7572 |
| 0.5316 | 6.4112 | 686 | 0.6805 | 0.6870 | 0.6805 | 0.8249 |
| 0.5316 | 6.4299 | 688 | 0.5795 | 0.6702 | 0.5795 | 0.7612 |
| 0.5316 | 6.4486 | 690 | 0.5605 | 0.6640 | 0.5605 | 0.7486 |
| 0.5316 | 6.4673 | 692 | 0.5266 | 0.6387 | 0.5266 | 0.7257 |
| 0.5316 | 6.4860 | 694 | 0.5751 | 0.6710 | 0.5751 | 0.7584 |
| 0.5316 | 6.5047 | 696 | 0.5937 | 0.6882 | 0.5937 | 0.7705 |
| 0.5316 | 6.5234 | 698 | 0.5884 | 0.6870 | 0.5884 | 0.7670 |
| 0.5316 | 6.5421 | 700 | 0.5023 | 0.6286 | 0.5023 | 0.7087 |
| 0.5316 | 6.5607 | 702 | 0.4738 | 0.5971 | 0.4738 | 0.6883 |
| 0.5316 | 6.5794 | 704 | 0.5180 | 0.6419 | 0.5180 | 0.7197 |
| 0.5316 | 6.5981 | 706 | 0.6217 | 0.7048 | 0.6217 | 0.7885 |
| 0.5316 | 6.6168 | 708 | 0.5907 | 0.6973 | 0.5907 | 0.7686 |
| 0.5316 | 6.6355 | 710 | 0.5687 | 0.6955 | 0.5687 | 0.7541 |
| 0.5316 | 6.6542 | 712 | 0.6327 | 0.6940 | 0.6327 | 0.7954 |
| 0.5316 | 6.6729 | 714 | 0.6855 | 0.6877 | 0.6855 | 0.8280 |
| 0.5316 | 6.6916 | 716 | 0.6280 | 0.7036 | 0.6280 | 0.7925 |
| 0.5316 | 6.7103 | 718 | 0.5043 | 0.6300 | 0.5043 | 0.7101 |
| 0.5316 | 6.7290 | 720 | 0.4958 | 0.6221 | 0.4958 | 0.7041 |
| 0.5316 | 6.7477 | 722 | 0.6012 | 0.7092 | 0.6012 | 0.7754 |
| 0.5316 | 6.7664 | 724 | 0.7502 | 0.6871 | 0.7502 | 0.8662 |
| 0.5316 | 6.7850 | 726 | 0.6799 | 0.6958 | 0.6799 | 0.8246 |
| 0.5316 | 6.8037 | 728 | 0.4959 | 0.6479 | 0.4959 | 0.7042 |
| 0.5316 | 6.8224 | 730 | 0.4405 | 0.5751 | 0.4405 | 0.6637 |
| 0.5316 | 6.8411 | 732 | 0.4477 | 0.5866 | 0.4477 | 0.6691 |
| 0.5316 | 6.8598 | 734 | 0.5449 | 0.6832 | 0.5449 | 0.7382 |
| 0.5316 | 6.8785 | 736 | 0.6093 | 0.7130 | 0.6093 | 0.7805 |
| 0.5316 | 6.8972 | 738 | 0.5420 | 0.6856 | 0.5420 | 0.7362 |
| 0.5316 | 6.9159 | 740 | 0.4607 | 0.6041 | 0.4607 | 0.6787 |
| 0.5316 | 6.9346 | 742 | 0.4563 | 0.5874 | 0.4563 | 0.6755 |
| 0.5316 | 6.9533 | 744 | 0.5143 | 0.6732 | 0.5143 | 0.7171 |
| 0.5316 | 6.9720 | 746 | 0.5908 | 0.7093 | 0.5908 | 0.7687 |
| 0.5316 | 6.9907 | 748 | 0.5481 | 0.7021 | 0.5481 | 0.7404 |
| 0.5316 | 7.0093 | 750 | 0.4877 | 0.6381 | 0.4877 | 0.6984 |
| 0.5316 | 7.0280 | 752 | 0.4968 | 0.6544 | 0.4968 | 0.7048 |
| 0.5316 | 7.0467 | 754 | 0.5699 | 0.6966 | 0.5699 | 0.7549 |
| 0.5316 | 7.0654 | 756 | 0.6870 | 0.7123 | 0.6870 | 0.8288 |
| 0.5316 | 7.0841 | 758 | 0.6323 | 0.7061 | 0.6323 | 0.7952 |
| 0.5316 | 7.1028 | 760 | 0.4990 | 0.6245 | 0.4990 | 0.7064 |
| 0.5316 | 7.1215 | 762 | 0.4717 | 0.5920 | 0.4717 | 0.6868 |
| 0.5316 | 7.1402 | 764 | 0.5091 | 0.6518 | 0.5091 | 0.7135 |
| 0.5316 | 7.1589 | 766 | 0.5631 | 0.6917 | 0.5631 | 0.7504 |
| 0.5316 | 7.1776 | 768 | 0.6180 | 0.7111 | 0.6180 | 0.7861 |
| 0.5316 | 7.1963 | 770 | 0.5466 | 0.6760 | 0.5466 | 0.7393 |
| 0.5316 | 7.2150 | 772 | 0.5239 | 0.6422 | 0.5239 | 0.7238 |
| 0.5316 | 7.2336 | 774 | 0.6146 | 0.6998 | 0.6146 | 0.7840 |
| 0.5316 | 7.2523 | 776 | 0.6608 | 0.7047 | 0.6608 | 0.8129 |
| 0.5316 | 7.2710 | 778 | 0.5815 | 0.6880 | 0.5815 | 0.7626 |
| 0.5316 | 7.2897 | 780 | 0.5818 | 0.6966 | 0.5818 | 0.7628 |
| 0.5316 | 7.3084 | 782 | 0.7087 | 0.7037 | 0.7087 | 0.8419 |
| 0.5316 | 7.3271 | 784 | 0.7176 | 0.7024 | 0.7176 | 0.8471 |
| 0.5316 | 7.3458 | 786 | 0.6344 | 0.7076 | 0.6344 | 0.7965 |
| 0.5316 | 7.3645 | 788 | 0.6432 | 0.7144 | 0.6432 | 0.8020 |
| 0.5316 | 7.3832 | 790 | 0.5922 | 0.7027 | 0.5922 | 0.7696 |
| 0.5316 | 7.4019 | 792 | 0.5171 | 0.6650 | 0.5171 | 0.7191 |
| 0.5316 | 7.4206 | 794 | 0.5403 | 0.6791 | 0.5403 | 0.7351 |
| 0.5316 | 7.4393 | 796 | 0.6197 | 0.7080 | 0.6197 | 0.7872 |
| 0.5316 | 7.4579 | 798 | 0.5789 | 0.6957 | 0.5789 | 0.7609 |
| 0.5316 | 7.4766 | 800 | 0.4977 | 0.6440 | 0.4977 | 0.7055 |
| 0.5316 | 7.4953 | 802 | 0.5129 | 0.6520 | 0.5129 | 0.7162 |
| 0.5316 | 7.5140 | 804 | 0.6368 | 0.7057 | 0.6368 | 0.7980 |
| 0.5316 | 7.5327 | 806 | 0.7137 | 0.6948 | 0.7137 | 0.8448 |
| 0.5316 | 7.5514 | 808 | 0.6228 | 0.6944 | 0.6228 | 0.7892 |
| 0.5316 | 7.5701 | 810 | 0.4844 | 0.6189 | 0.4844 | 0.6960 |
| 0.5316 | 7.5888 | 812 | 0.4571 | 0.5807 | 0.4571 | 0.6761 |
| 0.5316 | 7.6075 | 814 | 0.4673 | 0.5916 | 0.4673 | 0.6836 |
| 0.5316 | 7.6262 | 816 | 0.5483 | 0.6694 | 0.5483 | 0.7404 |
| 0.5316 | 7.6449 | 818 | 0.7301 | 0.6775 | 0.7301 | 0.8545 |
| 0.5316 | 7.6636 | 820 | 0.7522 | 0.6921 | 0.7522 | 0.8673 |
| 0.5316 | 7.6822 | 822 | 0.6251 | 0.7006 | 0.6251 | 0.7906 |
| 0.5316 | 7.7009 | 824 | 0.4771 | 0.6181 | 0.4771 | 0.6907 |
| 0.5316 | 7.7196 | 826 | 0.4510 | 0.5645 | 0.4510 | 0.6715 |
| 0.5316 | 7.7383 | 828 | 0.4507 | 0.5850 | 0.4507 | 0.6714 |
| 0.5316 | 7.7570 | 830 | 0.4919 | 0.6440 | 0.4919 | 0.7013 |
| 0.5316 | 7.7757 | 832 | 0.6126 | 0.7071 | 0.6126 | 0.7827 |
| 0.5316 | 7.7944 | 834 | 0.6352 | 0.7089 | 0.6352 | 0.7970 |
| 0.5316 | 7.8131 | 836 | 0.5513 | 0.6887 | 0.5513 | 0.7425 |
| 0.5316 | 7.8318 | 838 | 0.4945 | 0.6309 | 0.4945 | 0.7032 |
| 0.5316 | 7.8505 | 840 | 0.5115 | 0.6418 | 0.5115 | 0.7152 |
| 0.5316 | 7.8692 | 842 | 0.5847 | 0.6838 | 0.5847 | 0.7646 |
| 0.5316 | 7.8879 | 844 | 0.7153 | 0.7045 | 0.7153 | 0.8458 |
| 0.5316 | 7.9065 | 846 | 0.7274 | 0.7019 | 0.7274 | 0.8529 |
| 0.5316 | 7.9252 | 848 | 0.6176 | 0.6893 | 0.6176 | 0.7859 |
| 0.5316 | 7.9439 | 850 | 0.5218 | 0.6284 | 0.5218 | 0.7224 |
| 0.5316 | 7.9626 | 852 | 0.5189 | 0.6252 | 0.5189 | 0.7204 |
| 0.5316 | 7.9813 | 854 | 0.5894 | 0.6740 | 0.5894 | 0.7677 |
| 0.5316 | 8.0 | 856 | 0.6887 | 0.6893 | 0.6887 | 0.8299 |
| 0.5316 | 8.0187 | 858 | 0.6827 | 0.6935 | 0.6827 | 0.8263 |
| 0.5316 | 8.0374 | 860 | 0.6291 | 0.6837 | 0.6291 | 0.7931 |
| 0.5316 | 8.0561 | 862 | 0.5574 | 0.6522 | 0.5574 | 0.7466 |
| 0.5316 | 8.0748 | 864 | 0.5502 | 0.6480 | 0.5502 | 0.7418 |
| 0.5316 | 8.0935 | 866 | 0.5922 | 0.6823 | 0.5922 | 0.7695 |
| 0.5316 | 8.1121 | 868 | 0.6314 | 0.6924 | 0.6314 | 0.7946 |
| 0.5316 | 8.1308 | 870 | 0.5905 | 0.6813 | 0.5905 | 0.7684 |
| 0.5316 | 8.1495 | 872 | 0.5131 | 0.6382 | 0.5131 | 0.7163 |
| 0.5316 | 8.1682 | 874 | 0.4886 | 0.6038 | 0.4886 | 0.6990 |
| 0.5316 | 8.1869 | 876 | 0.5124 | 0.6398 | 0.5124 | 0.7158 |
| 0.5316 | 8.2056 | 878 | 0.5837 | 0.6842 | 0.5837 | 0.7640 |
| 0.5316 | 8.2243 | 880 | 0.6645 | 0.6960 | 0.6645 | 0.8152 |
| 0.5316 | 8.2430 | 882 | 0.6773 | 0.6943 | 0.6773 | 0.8230 |
| 0.5316 | 8.2617 | 884 | 0.6535 | 0.6914 | 0.6535 | 0.8084 |
| 0.5316 | 8.2804 | 886 | 0.6279 | 0.6941 | 0.6279 | 0.7924 |
| 0.5316 | 8.2991 | 888 | 0.5607 | 0.6720 | 0.5607 | 0.7488 |
| 0.5316 | 8.3178 | 890 | 0.5433 | 0.6683 | 0.5433 | 0.7371 |
| 0.5316 | 8.3364 | 892 | 0.5897 | 0.6856 | 0.5897 | 0.7679 |
| 0.5316 | 8.3551 | 894 | 0.6784 | 0.7046 | 0.6784 | 0.8237 |
| 0.5316 | 8.3738 | 896 | 0.7183 | 0.7033 | 0.7183 | 0.8475 |
| 0.5316 | 8.3925 | 898 | 0.7041 | 0.7037 | 0.7041 | 0.8391 |
| 0.5316 | 8.4112 | 900 | 0.6389 | 0.6899 | 0.6389 | 0.7993 |
| 0.5316 | 8.4299 | 902 | 0.5549 | 0.6771 | 0.5549 | 0.7449 |
| 0.5316 | 8.4486 | 904 | 0.5263 | 0.6533 | 0.5263 | 0.7255 |
| 0.5316 | 8.4673 | 906 | 0.5234 | 0.6560 | 0.5234 | 0.7235 |
| 0.5316 | 8.4860 | 908 | 0.5253 | 0.6660 | 0.5253 | 0.7248 |
| 0.5316 | 8.5047 | 910 | 0.5084 | 0.6499 | 0.5084 | 0.7130 |
| 0.5316 | 8.5234 | 912 | 0.5052 | 0.6428 | 0.5052 | 0.7108 |
| 0.5316 | 8.5421 | 914 | 0.5377 | 0.6684 | 0.5377 | 0.7333 |
| 0.5316 | 8.5607 | 916 | 0.5741 | 0.6940 | 0.5741 | 0.7577 |
| 0.5316 | 8.5794 | 918 | 0.5507 | 0.6692 | 0.5507 | 0.7421 |
| 0.5316 | 8.5981 | 920 | 0.5250 | 0.6576 | 0.5250 | 0.7246 |
| 0.5316 | 8.6168 | 922 | 0.5392 | 0.6619 | 0.5392 | 0.7343 |
| 0.5316 | 8.6355 | 924 | 0.5703 | 0.6851 | 0.5703 | 0.7552 |
| 0.5316 | 8.6542 | 926 | 0.5862 | 0.6972 | 0.5862 | 0.7657 |
| 0.5316 | 8.6729 | 928 | 0.6137 | 0.6931 | 0.6137 | 0.7834 |
| 0.5316 | 8.6916 | 930 | 0.5877 | 0.6899 | 0.5877 | 0.7666 |
| 0.5316 | 8.7103 | 932 | 0.5524 | 0.6634 | 0.5524 | 0.7432 |
| 0.5316 | 8.7290 | 934 | 0.5509 | 0.6632 | 0.5509 | 0.7422 |
| 0.5316 | 8.7477 | 936 | 0.5784 | 0.6885 | 0.5784 | 0.7605 |
| 0.5316 | 8.7664 | 938 | 0.6493 | 0.6971 | 0.6493 | 0.8058 |
| 0.5316 | 8.7850 | 940 | 0.6858 | 0.7018 | 0.6858 | 0.8281 |
| 0.5316 | 8.8037 | 942 | 0.6454 | 0.7003 | 0.6454 | 0.8034 |
| 0.5316 | 8.8224 | 944 | 0.5753 | 0.6845 | 0.5753 | 0.7585 |
| 0.5316 | 8.8411 | 946 | 0.5274 | 0.6450 | 0.5274 | 0.7262 |
| 0.5316 | 8.8598 | 948 | 0.5250 | 0.6347 | 0.5250 | 0.7245 |
| 0.5316 | 8.8785 | 950 | 0.5547 | 0.6582 | 0.5547 | 0.7448 |
| 0.5316 | 8.8972 | 952 | 0.6114 | 0.6831 | 0.6114 | 0.7819 |
| 0.5316 | 8.9159 | 954 | 0.6659 | 0.7025 | 0.6659 | 0.8160 |
| 0.5316 | 8.9346 | 956 | 0.6860 | 0.7039 | 0.6860 | 0.8283 |
| 0.5316 | 8.9533 | 958 | 0.6468 | 0.6927 | 0.6468 | 0.8042 |
| 0.5316 | 8.9720 | 960 | 0.5718 | 0.6839 | 0.5718 | 0.7562 |
| 0.5316 | 8.9907 | 962 | 0.5151 | 0.6369 | 0.5151 | 0.7177 |
| 0.5316 | 9.0093 | 964 | 0.5068 | 0.6275 | 0.5068 | 0.7119 |
| 0.5316 | 9.0280 | 966 | 0.5313 | 0.6394 | 0.5313 | 0.7289 |
| 0.5316 | 9.0467 | 968 | 0.5928 | 0.6775 | 0.5928 | 0.7700 |
| 0.5316 | 9.0654 | 970 | 0.6464 | 0.6931 | 0.6464 | 0.8040 |
| 0.5316 | 9.0841 | 972 | 0.6707 | 0.6918 | 0.6707 | 0.8190 |
| 0.5316 | 9.1028 | 974 | 0.6483 | 0.6962 | 0.6483 | 0.8052 |
| 0.5316 | 9.1215 | 976 | 0.6066 | 0.6895 | 0.6066 | 0.7788 |
| 0.5316 | 9.1402 | 978 | 0.5566 | 0.6778 | 0.5566 | 0.7461 |
| 0.5316 | 9.1589 | 980 | 0.5386 | 0.6726 | 0.5386 | 0.7339 |
| 0.5316 | 9.1776 | 982 | 0.5393 | 0.6778 | 0.5393 | 0.7344 |
| 0.5316 | 9.1963 | 984 | 0.5549 | 0.6789 | 0.5549 | 0.7449 |
| 0.5316 | 9.2150 | 986 | 0.5683 | 0.6912 | 0.5683 | 0.7539 |
| 0.5316 | 9.2336 | 988 | 0.5900 | 0.6917 | 0.5900 | 0.7681 |
| 0.5316 | 9.2523 | 990 | 0.5902 | 0.6941 | 0.5902 | 0.7682 |
| 0.5316 | 9.2710 | 992 | 0.5662 | 0.6813 | 0.5662 | 0.7525 |
| 0.5316 | 9.2897 | 994 | 0.5591 | 0.6719 | 0.5591 | 0.7478 |
| 0.5316 | 9.3084 | 996 | 0.5678 | 0.6816 | 0.5678 | 0.7535 |
| 0.5316 | 9.3271 | 998 | 0.5845 | 0.6856 | 0.5845 | 0.7645 |
| 0.1361 | 9.3458 | 1000 | 0.5850 | 0.6876 | 0.5850 | 0.7648 |
| 0.1361 | 9.3645 | 1002 | 0.5728 | 0.6771 | 0.5728 | 0.7569 |
| 0.1361 | 9.3832 | 1004 | 0.5544 | 0.6742 | 0.5544 | 0.7446 |
| 0.1361 | 9.4019 | 1006 | 0.5442 | 0.6650 | 0.5442 | 0.7377 |
| 0.1361 | 9.4206 | 1008 | 0.5498 | 0.6729 | 0.5498 | 0.7415 |
| 0.1361 | 9.4393 | 1010 | 0.5615 | 0.6768 | 0.5615 | 0.7493 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
JuniperChinenye/a1
|
JuniperChinenye
| 2024-11-05T21:34:30Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T21:31:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf
|
RichardErkhov
| 2024-11-05T21:26:31Z | 7 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-04T23:14:03Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16 - GGUF
- Model creator: https://huggingface.co/cloudyu/
- Original model: https://huggingface.co/cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q2_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/blob/main/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q2_K.gguf) | Q2_K | 20.86GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/blob/main/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q3_K_S.gguf) | Q3_K_S | 24.51GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q3_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/blob/main/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q3_K.gguf) | Q3_K | 27.23GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/blob/main/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q3_K_M.gguf) | Q3_K_M | 27.23GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/blob/main/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q3_K_L.gguf) | Q3_K_L | 29.59GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/blob/main/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.IQ4_XS.gguf) | IQ4_XS | 30.58GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q4_0.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/blob/main/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q4_0.gguf) | Q4_0 | 31.98GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/blob/main/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.IQ4_NL.gguf) | IQ4_NL | 32.27GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/blob/main/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q4_K_S.gguf) | Q4_K_S | 32.22GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q4_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/blob/main/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q4_K.gguf) | Q4_K | 34.14GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/blob/main/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q4_K_M.gguf) | Q4_K_M | 34.14GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q4_1.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/blob/main/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q4_1.gguf) | Q4_1 | 35.49GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q5_0.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/tree/main/) | Q5_0 | 39.0GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/tree/main/) | Q5_K_S | 39.0GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q5_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/tree/main/) | Q5_K | 40.12GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/tree/main/) | Q5_K_M | 40.12GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q5_1.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/tree/main/) | Q5_1 | 42.51GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q6_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/tree/main/) | Q6_K | 46.47GB |
| [TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q8_0.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16-gguf/tree/main/) | Q8_0 | 60.18GB |
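As a quick usage sketch, one of the files above can be loaded with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (a minimal, untested example; the file name below is the Q4_K_M quant from the table and stands in for whichever file you download):
```python
# Minimal sketch, assuming llama-cpp-python is installed and the GGUF file
# has been downloaded locally (path is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16.Q4_K_M.gguf",
    n_ctx=4096,       # context window; adjust to available memory
    n_gpu_layers=-1,  # offload all layers to GPU if built with GPU support
)
out = llm("Write one sentence about mixture-of-experts models.", max_tokens=64)
print(out["choices"][0]["text"])
```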
Original model description:
---
tags:
- yi
- moe
license: apache-2.0
---
This is a DPO fine-tuned MoE model based on [TomGrc/FusionNet_34Bx2_MoE_v0.1](https://huggingface.co/TomGrc/FusionNet_34Bx2_MoE_v0.1).
```
DPO Trainer
TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
```
Metrics
[Metrics](https://huggingface.co/cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO/blob/main/4bit.vs.16.jpg)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cloudyu__TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16)
| Metric |Value|
|---------------------------------|----:|
|Avg. |77.91|
|AI2 Reasoning Challenge (25-Shot)|74.06|
|HellaSwag (10-Shot) |86.74|
|MMLU (5-Shot) |76.65|
|TruthfulQA (0-shot) |72.24|
|Winogrande (5-shot) |83.35|
|GSM8k (5-shot) |74.45|
|
UnaiGurbindo/Data_augmentation_model
|
UnaiGurbindo
| 2024-11-05T21:24:02Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-10-29T09:02:09Z |
---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-UnaiGurbindo/Data_augmentation_model
These are ControlNet weights trained on stable-diffusion-v1-5/stable-diffusion-v1-5 with a new type of conditioning.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
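# In the meantime, a minimal, untested sketch (the conditioning type is not
# documented in this card, so `conditioning_image` below is a placeholder
# you must supply yourself):
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "UnaiGurbindo/Data_augmentation_model", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# image = pipe("your prompt", image=conditioning_image).images[0]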
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
NikolayKozloff/Phi-3-mini-4k-instruct-sq-LORA-F32-GGUF
|
NikolayKozloff
| 2024-11-05T21:19:26Z | 27 | 1 |
transformers
|
[
"transformers",
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-lora",
"text-generation",
"sq",
"base_model:Kushtrim/Phi-3-mini-4k-instruct-sq",
"base_model:quantized:Kushtrim/Phi-3-mini-4k-instruct-sq",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T21:19:24Z |
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
language:
- sq
library_name: transformers
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-lora
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: Identifiko emrat e personave në këtë artikull 'Majlinda Kelmendi (lindi
më 9 maj 1991), është një xhudiste shqiptare nga Peja, Kosovë.'
base_model: Kushtrim/Phi-3-mini-4k-instruct-sq
---
# NikolayKozloff/Phi-3-mini-4k-instruct-sq-F32-GGUF
This LoRA adapter was converted to GGUF format from [`Kushtrim/Phi-3-mini-4k-instruct-sq`](https://huggingface.co/Kushtrim/Phi-3-mini-4k-instruct-sq) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/Kushtrim/Phi-3-mini-4k-instruct-sq) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora Phi-3-mini-4k-instruct-sq-f32.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora Phi-3-mini-4k-instruct-sq-f32.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
NikolayKozloff/Phi-3-medium-4k-instruct-sq-LORA-F16-GGUF
|
NikolayKozloff
| 2024-11-05T21:15:56Z | 23 | 1 |
transformers
|
[
"transformers",
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-lora",
"text-generation",
"sq",
"base_model:Kushtrim/Phi-3-medium-4k-instruct-sq",
"base_model:quantized:Kushtrim/Phi-3-medium-4k-instruct-sq",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T21:15:53Z |
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE
language:
- sq
library_name: transformers
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-lora
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: Identifiko emrat e personave në këtë artikull 'Majlinda Kelmendi (lindi
më 9 maj 1991), është një xhudiste shqiptare nga Peja, Kosovë.'
base_model: Kushtrim/Phi-3-medium-4k-instruct-sq
---
# NikolayKozloff/Phi-3-medium-4k-instruct-sq-F16-GGUF
This LoRA adapter was converted to GGUF format from [`Kushtrim/Phi-3-medium-4k-instruct-sq`](https://huggingface.co/Kushtrim/Phi-3-medium-4k-instruct-sq) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/Kushtrim/Phi-3-medium-4k-instruct-sq) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora Phi-3-medium-4k-instruct-sq-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora Phi-3-medium-4k-instruct-sq-f16.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
EndOfLe/fine_tuned_3e-5
|
EndOfLe
| 2024-11-05T21:08:47Z | 5 | 0 | null |
[
"safetensors",
"roberta",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"region:us"
] | null | 2024-11-05T21:07:27Z |
---
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fine_tuned_3e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_3e-5
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0369
- Accuracy: 0.9933
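A minimal inference sketch (an assumption, untested: the checkpoint loads with the standard text-classification pipeline; the label set is not documented in this card):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="EndOfLe/fine_tuned_3e-5")
print(clf("Example input sentence."))  # label names depend on the undocumented training data
```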
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4112 | 0.15 | 100 | 0.2334 | 0.96 |
| 0.3366 | 0.3 | 200 | 0.2366 | 0.96 |
| 0.2548 | 0.44 | 300 | 0.3344 | 0.9233 |
| 0.1728 | 0.59 | 400 | 0.5630 | 0.9017 |
| 0.1559 | 0.74 | 500 | 0.1761 | 0.9733 |
| 0.1139 | 0.89 | 600 | 0.9891 | 0.835 |
| 0.1084 | 1.04 | 700 | 0.1377 | 0.9733 |
| 0.0551 | 1.19 | 800 | 0.0782 | 0.9833 |
| 0.0829 | 1.33 | 900 | 0.0325 | 0.9933 |
| 0.0411 | 1.48 | 1000 | 0.0369 | 0.9933 |
| 0.0274 | 1.63 | 1100 | 0.0144 | 0.9983 |
| 0.0242 | 1.78 | 1200 | 0.0524 | 0.9933 |
| 0.0261 | 1.93 | 1300 | 0.1679 | 0.9817 |
| 0.0115 | 2.07 | 1400 | 0.0870 | 0.9883 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.1+cu118
- Datasets 2.16.0
- Tokenizers 0.15.0
|
minchyeom/birthday-2
|
minchyeom
| 2024-11-05T21:07:38Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"en",
"base_model:minchyeom/birthday-llm",
"base_model:finetune:minchyeom/birthday-llm",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T19:54:44Z |
---
library_name: transformers
language:
- en
base_model:
- minchyeom/birthday-llm
---
It's my birthday!!
Use this system prompt:
```
Respond to each user instruction in an XML format, using <step> tags to document your logical reasoning process step-by-step, while the <output> tag should be reserved for your final communication with the user. Incorporate self-correction by reflecting on prior steps; if a previous thought requires adjustment, add a new <step> to refine your reasoning without altering the original. Include self-reflection by periodically assessing your thought process and noting any uncertainties or assumptions in separate <step> tags. Ensure that each step logically follows from the previous one, contributing to a coherent line of reasoning, and use the <output> to convey your final answer or conclusion to the user clearly.
```
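A minimal usage sketch (an assumption, untested; because some gemma2 chat templates reject a `system` role, the prompt is prepended to the user turn here):
```python
from transformers import pipeline

SYSTEM_PROMPT = "Respond to each user instruction in an XML format..."  # full prompt above

chat = pipeline("text-generation", model="minchyeom/birthday-2")
messages = [{"role": "user", "content": SYSTEM_PROMPT + "\n\nWhat is 17 * 23?"}]
out = chat(messages, max_new_tokens=512)
print(out[0]["generated_text"][-1]["content"])
```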
|
tahayf/resnet-50_ferplus
|
tahayf
| 2024-11-05T21:03:52Z | 25 | 0 | null |
[
"safetensors",
"resnet",
"image-classification",
"ferplus",
"emotions",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"region:us"
] |
image-classification
| 2024-11-05T20:54:43Z |
---
base_model: microsoft/resnet-50
tags:
- image-classification
- ferplus
- emotions
---
# Fine-Tuned ResNet-50 on FERPlus Dataset
This model is a fine-tuned version of ResNet-50 on the [FERPlus dataset](https://www.kaggle.com/datasets/arnabkumarroy02/ferplus), a version of FERPlus that its maintainer describes as more class-balanced.
## Model Details
- **Base Model**: [Microsoft ResNet-50](https://huggingface.co/microsoft/resnet-50)
- **Dataset**: FERPlus, which contains grayscale images of faces labeled with emotion categories.
- **Task**: Emotion Classification
- **Labels**:
- 0: Angry
- 1: Contempt
- 2: Disgust
- 3: Fear
- 4: Happy
- 5: Neutral
- 6: Sad
- 7: Surprise
## Preprocessing Details
This model was fine-tuned on FERPlus images resized to 224x224 pixels. Standard data augmentation techniques were applied, and normalization used the following values:
- **Mean**: `[0.485, 0.456, 0.406]`
- **Standard Deviation**: `[0.229, 0.224, 0.225]`
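A minimal inference sketch applying the preprocessing above (assumptions: the checkpoint loads via `AutoModelForImageClassification`, the output order matches the label list above, and the image path is a placeholder):
```python
import torch
from PIL import Image
from torchvision import transforms
from transformers import AutoModelForImageClassification

# Preprocessing matching the values documented above.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = AutoModelForImageClassification.from_pretrained("tahayf/resnet-50_ferplus")
model.eval()

labels = ["Angry", "Contempt", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

image = Image.open("face.jpg").convert("RGB")  # placeholder path
inputs = preprocess(image).unsqueeze(0)
with torch.no_grad():
    logits = model(pixel_values=inputs).logits
print(labels[logits.argmax(-1).item()])
```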
### Training Hyperparameters
- **Batch Size**: 16
- **Epochs**: 10
- **Learning Rate**: 2e-5
- **Weight Decay**: 0.01
|
zaddyzaddy/gemma-zero-zero
|
zaddyzaddy
| 2024-11-05T20:57:17Z | 92 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T20:53:51Z |
---
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** zaddyzaddy
- **License:** apache-2.0
- **Finetuned from model :** HuggingFaceTB/SmolLM2-1.7B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Dumoura/sd-class-noiseTonoise-128
|
Dumoura
| 2024-11-05T20:51:42Z | 48 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-11-05T20:50:30Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation (noise-to-noise).
## Usage
```python
from diffusers import DDPMPipeline

# Load the pipeline from the Hub and sample a single image.
pipeline = DDPMPipeline.from_pretrained('Dumoura/sd-class-noiseTonoise-128')
image = pipeline().images[0]
image
```
|
MangoHaha/qwen2-7b-instruct-amazon-description
|
MangoHaha
| 2024-11-05T20:47:13Z | 5 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2024-11-05T08:26:08Z |
---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: peft
license: apache-2.0
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: qwen2-7b-instruct-amazon-description
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2-7b-instruct-amazon-description
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) on an unknown dataset.
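A minimal loading sketch (an assumption, untested: this repo holds a PEFT/LoRA adapter for the base model named above):
```python
from peft import PeftModel
from transformers import Qwen2VLForConditionalGeneration

# Load the base model, then attach the adapter weights from this repo.
base = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "MangoHaha/qwen2-7b-instruct-amazon-description")
```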
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.13.0
- Transformers 4.45.1
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.3
|
rizki-syazali/tapasid_finetuned_splitted_hitab_to_itqa
|
rizki-syazali
| 2024-11-05T20:45:08Z | 63 | 0 |
transformers
|
[
"transformers",
"safetensors",
"tapas",
"table-question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2024-11-05T19:33:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mlx-community/Yi-6B-Chat-8bit
|
mlx-community
| 2024-11-05T20:38:57Z | 9 | 0 |
mlx
|
[
"mlx",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:01-ai/Yi-6B-Chat",
"base_model:quantized:01-ai/Yi-6B-Chat",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2024-11-05T20:36:32Z |
---
license: apache-2.0
widget:
- example_title: Yi-34B-Chat
text: hi
output:
text: ' Hello! How can I assist you today?'
- example_title: Yi-34B
text: There's a place where time stands still. A place of breath taking wonder,
but also
output:
text: ' an eerie sense that something is just not right…
Between the two worlds lies The Forgotten Kingdom - home to creatures long since
thought extinct and ancient magic so strong it defies belief! Only here can
you find what has been lost for centuries: An Elixir Of Life which will restore
youth and vitality if only those who seek its power are brave enough to face
up against all manner of dangers lurking in this mysterious land! But beware;
some say there may even exist powerful entities beyond our comprehension whose
intentions towards humanity remain unclear at best ---- they might want nothing
more than destruction itself rather then anything else from their quest after
immortality (and maybe someone should tell them about modern medicine)? In any
event though – one thing remains true regardless : whether or not success comes
easy depends entirely upon how much effort we put into conquering whatever challenges
lie ahead along with having faith deep down inside ourselves too ;) So let’s
get started now shall We?'
pipeline_tag: text-generation
tags:
- mlx
base_model: 01-ai/Yi-6B-Chat
---
# mlx-community/Yi-6B-Chat-8bit
The Model [mlx-community/Yi-6B-Chat-8bit](https://huggingface.co/mlx-community/Yi-6B-Chat-8bit) was converted to MLX format from [01-ai/Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) using mlx-lm version **0.19.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Yi-6B-Chat-8bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF
|
mradermacher
| 2024-11-05T20:34:07Z | 33 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/OmniBeagle-7B",
"flemmingmiguel/MBX-7B-v3",
"AiMavenAi/AiMaven-Prometheus",
"en",
"base_model:Kquant03/NeuralTrix-7B-dpo-laser",
"base_model:quantized:Kquant03/NeuralTrix-7B-dpo-laser",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-05T17:53:25Z |
---
base_model: Kquant03/NeuralTrix-7B-dpo-laser
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/OmniBeagle-7B
- flemmingmiguel/MBX-7B-v3
- AiMavenAi/AiMaven-Prometheus
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Kquant03/NeuralTrix-7B-dpo-laser
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralTrix-7B-dpo-laser-i1-GGUF/resolve/main/NeuralTrix-7B-dpo-laser.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
CohenQu/implicit_rank_200000
|
CohenQu
| 2024-11-05T20:32:44Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T20:09:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
exala/db_aca2_4.9
|
exala
| 2024-11-05T20:31:53Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T20:31:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Phi-3.5-Mounjaro-GGUF
|
mradermacher
| 2024-11-05T20:27:08Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:njprogrammer/Phi-3.5-Mounjaro",
"base_model:quantized:njprogrammer/Phi-3.5-Mounjaro",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-05T20:11:56Z |
---
base_model: njprogrammer/Phi-3.5-Mounjaro
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/njprogrammer/Phi-3.5-Mounjaro
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mounjaro-GGUF/resolve/main/Phi-3.5-Mounjaro.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mounjaro-GGUF/resolve/main/Phi-3.5-Mounjaro.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mounjaro-GGUF/resolve/main/Phi-3.5-Mounjaro.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mounjaro-GGUF/resolve/main/Phi-3.5-Mounjaro.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mounjaro-GGUF/resolve/main/Phi-3.5-Mounjaro.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mounjaro-GGUF/resolve/main/Phi-3.5-Mounjaro.Q4_0_4_4.gguf) | Q4_0_4_4 | 2.3 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mounjaro-GGUF/resolve/main/Phi-3.5-Mounjaro.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mounjaro-GGUF/resolve/main/Phi-3.5-Mounjaro.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mounjaro-GGUF/resolve/main/Phi-3.5-Mounjaro.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mounjaro-GGUF/resolve/main/Phi-3.5-Mounjaro.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mounjaro-GGUF/resolve/main/Phi-3.5-Mounjaro.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mounjaro-GGUF/resolve/main/Phi-3.5-Mounjaro.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mounjaro-GGUF/resolve/main/Phi-3.5-Mounjaro.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sentence-transformers/multi-qa-distilbert-dot-v1
|
sentence-transformers
| 2024-11-05T20:21:39Z | 1,632 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"openvino",
"distilbert",
"fill-mask",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
language:
- en
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---
# multi-qa-distilbert-dot-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources. For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
#Load the model
model = SentenceTransformer('sentence-transformers/multi-qa-distilbert-dot-v1')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the correct pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#CLS Pooling - Take output from first token
def cls_pooling(model_output):
return model_output.last_hidden_state[:,0]
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = cls_pooling(model_output)
return embeddings
# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/multi-qa-distilbert-dot-v1")
model = AutoModel.from_pretrained("sentence-transformers/multi-qa-distilbert-dot-v1")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Technical Details
Below are some technical details on how this model should be used:
| Setting | Value |
| --- | :---: |
| Dimensions | 768 |
| Produces normalized embeddings | No |
| Pooling-Method | CLS pooling |
| Suitable score functions | dot-product (e.g. `util.dot_score`) |
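Because the embeddings are not normalized, dot-product and cosine similarity can yield different scores (and rankings). A quick way to see this, reusing the example sentences from above:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/multi-qa-distilbert-dot-v1")
query_emb = model.encode("How many people live in London?")
doc_emb = model.encode(["Around 9 Million people live in London",
                        "London is known for its financial district"])

# dot_score is what the model was trained for; cos_sim normalizes vectors first
print(util.dot_score(query_emb, doc_emb))
print(util.cos_sim(query_emb, doc_emb))
```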
----
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective: given a sentence from a pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used for semantic search: It encodes queries / questions and text paragraphs in a dense vector space and finds relevant documents for a given query.
Note that there is a limit of 512 word pieces: Text longer than that will be truncated. Further note that the model was only trained on input text up to 250 word pieces; it might not work well for longer text.
## Training procedure
The full training script is available in this repository: `train_script.py`.
### Pre-training
We use the pretrained [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
#### Training
We use a concatenation of multiple datasets to fine-tune our model. In total we have about 215M (question, answer) pairs.
Each dataset was sampled with a weighted probability; the configuration is detailed in the `data_config.json` file.
The model was trained with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) using CLS pooling, dot-product as the similarity function, and a scale of 1.
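For illustration, a minimal sketch of this training setup with the sentence-transformers API (toy data standing in for the 215M-pair corpus; the authoritative script is `train_script.py`):

```python
from sentence_transformers import SentenceTransformer, InputExample, losses, models, util
from torch.utils.data import DataLoader

# CLS pooling on top of DistilBERT, as described above
word_emb = models.Transformer("distilbert-base-uncased", max_seq_length=250)
pooling = models.Pooling(word_emb.get_word_embedding_dimension(), pooling_mode="cls")
model = SentenceTransformer(modules=[word_emb, pooling])

# Toy (question, answer) pairs; MultipleNegativesRankingLoss treats the other
# answers in a batch as negatives, so large batches matter in practice.
train_examples = [
    InputExample(texts=["How many people live in London?",
                        "Around 9 Million people live in London"]),
    InputExample(texts=["What is the capital of France?",
                        "Paris is the capital of France"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# Dot-product similarity with scale 1, matching the card
train_loss = losses.MultipleNegativesRankingLoss(model, scale=1, similarity_fct=util.dot_score)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```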
| Dataset | Number of training tuples |
|--------------------------------------------------------|:--------------------------:|
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs from WikiAnswers | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) Automatically generated (Question, Paragraph) pairs for each paragraph in Wikipedia | 64,371,441 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs from all StackExchanges | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs from all StackExchanges | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) Triplets (query, answer, hard_negative) for 500k queries from Bing search engine | 17,579,773 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) (query, answer) pairs for 3M Google queries and Google featured snippet | 3,012,496 |
| [Amazon-QA](http://jmcauley.ucsd.edu/data/amazon/qa/) (Question, Answer) pairs from Amazon product pages | 2,448,839 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) pairs from Yahoo Answers | 1,198,260 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) pairs from Yahoo Answers | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) pairs from Yahoo Answers | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) (Question, Answer) pairs for 140k questions, each with Top5 Google snippets on that question | 582,261 |
| [ELI5](https://huggingface.co/datasets/eli5) (Question, Answer) pairs from Reddit ELI5 (explainlikeimfive) | 325,475 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions pairs (titles) | 304,525 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Question, Duplicate_Question, Hard_Negative) triplets for Quora Questions Pairs dataset | 103,663 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) (Question, Paragraph) pairs for 100k real Google queries with relevant Wikipedia paragraph | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) (Question, Paragraph) pairs from SQuAD2.0 dataset | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) (Question, Evidence) pairs | 73,346 |
| **Total** | **214,988,242** |
|
mradermacher/Python_Code_Generation_GPT2-GGUF
|
mradermacher
| 2024-11-05T20:20:08Z | 89 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:kaiest/Python_Code_Generation_GPT2",
"base_model:quantized:kaiest/Python_Code_Generation_GPT2",
"endpoints_compatible",
"region:us"
] | null | 2024-11-05T20:18:08Z |
---
base_model: kaiest/Python_Code_Generation_GPT2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kaiest/Python_Code_Generation_GPT2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
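As one option (an assumption on our part, not something this card prescribes), the quants below can be loaded from Python via `llama-cpp-python`:

```python
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Downloads and loads one of the quantized files listed below
llm = Llama.from_pretrained(
    repo_id="mradermacher/Python_Code_Generation_GPT2-GGUF",
    filename="Python_Code_Generation_GPT2.Q4_K_M.gguf",
)
print(llm("def fibonacci(n):", max_tokens=64)["choices"][0]["text"])
```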
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Python_Code_Generation_GPT2-GGUF/resolve/main/Python_Code_Generation_GPT2.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Python_Code_Generation_GPT2-GGUF/resolve/main/Python_Code_Generation_GPT2.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Python_Code_Generation_GPT2-GGUF/resolve/main/Python_Code_Generation_GPT2.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Python_Code_Generation_GPT2-GGUF/resolve/main/Python_Code_Generation_GPT2.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Python_Code_Generation_GPT2-GGUF/resolve/main/Python_Code_Generation_GPT2.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Python_Code_Generation_GPT2-GGUF/resolve/main/Python_Code_Generation_GPT2.Q4_0_4_4.gguf) | Q4_0_4_4 | 0.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Python_Code_Generation_GPT2-GGUF/resolve/main/Python_Code_Generation_GPT2.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Python_Code_Generation_GPT2-GGUF/resolve/main/Python_Code_Generation_GPT2.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Python_Code_Generation_GPT2-GGUF/resolve/main/Python_Code_Generation_GPT2.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Python_Code_Generation_GPT2-GGUF/resolve/main/Python_Code_Generation_GPT2.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Python_Code_Generation_GPT2-GGUF/resolve/main/Python_Code_Generation_GPT2.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Python_Code_Generation_GPT2-GGUF/resolve/main/Python_Code_Generation_GPT2.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Python_Code_Generation_GPT2-GGUF/resolve/main/Python_Code_Generation_GPT2.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
rizki-syazali/tapasid_finetuned_splitted_hitabid
|
rizki-syazali
| 2024-11-05T20:16:15Z | 66 | 0 |
transformers
|
[
"transformers",
"safetensors",
"tapas",
"table-question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2024-11-05T17:59:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
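As a minimal sketch, assuming this checkpoint works with the standard `transformers` table-question-answering pipeline (the table contents and the Indonesian query below are hypothetical):

```python
import pandas as pd
from transformers import pipeline

qa = pipeline("table-question-answering",
              model="rizki-syazali/tapasid_finetuned_splitted_hitabid")

# TAPAS expects all table cells as strings
table = pd.DataFrame({"Kota": ["Jakarta", "Bandung"],
                      "Penduduk": ["10500000", "2500000"]})
print(qa(table=table, query="Berapa penduduk Jakarta?"))
```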
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gaodrew/deberta-pii-masking-augmented-test2
|
gaodrew
| 2024-11-05T20:15:40Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-05T19:40:09Z |
---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: deberta-pii-masking-augmented-test2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-pii-masking-augmented-test2
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0248
- Precision: 0.9565
- Recall: 0.9663
- F1: 0.9613
- Accuracy: 0.9919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
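For reference, a sketch of a `TrainingArguments` configuration mirroring the list above (model, tokenizer, and dataset setup omitted; `output_dir` is illustrative):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="deberta-pii-masking-augmented-test2",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```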
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5574 | 0.16 | 1000 | 0.0750 | 0.8633 | 0.9081 | 0.8851 | 0.9774 |
| 0.0572 | 0.32 | 2000 | 0.0455 | 0.9151 | 0.9290 | 0.9220 | 0.9857 |
| 0.0401 | 0.48 | 3000 | 0.0395 | 0.9294 | 0.9452 | 0.9372 | 0.9873 |
| 0.0319 | 0.64 | 4000 | 0.0301 | 0.9443 | 0.9548 | 0.9496 | 0.9902 |
| 0.0277 | 0.8 | 5000 | 0.0264 | 0.9503 | 0.9618 | 0.9560 | 0.9912 |
| 0.0231 | 0.96 | 6000 | 0.0249 | 0.9538 | 0.9652 | 0.9595 | 0.9920 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
beddi/fine_tuned_llama-BROKEN
|
beddi
| 2024-11-05T20:13:03Z | 109 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T10:00:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alvarobrito/llama-3.1-fine-tuned
|
alvarobrito
| 2024-11-05T20:10:19Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T19:53:57Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** alvarobrito
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Harshatheeswar/babylama-scratch
|
Harshatheeswar
| 2024-11-05T20:02:44Z | 26 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:babylm/babyllama-100m-2024",
"base_model:finetune:babylm/babyllama-100m-2024",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T15:36:50Z |
---
library_name: transformers
base_model: babylm/babyllama-100m-2024
tags:
- generated_from_trainer
model-index:
- name: babylama-scratch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# babylama-scratch
This model is a fine-tuned version of [babylm/babyllama-100m-2024](https://huggingface.co/babylm/babyllama-100m-2024) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 4.3038 | 0.9999 | 5559 | 4.3162 |
| 4.0495 | 1.9999 | 11119 | 4.0657 |
| 3.8894 | 3.0000 | 16679 | 3.9579 |
| 3.769 | 4.0 | 22239 | 3.9166 |
| 3.73 | 4.9993 | 27795 | 3.9140 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
JhonMR/RoBertaLex_v10
|
JhonMR
| 2024-11-05T19:55:55Z | 121 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:PlanTL-GOB-ES/RoBERTalex",
"base_model:finetune:PlanTL-GOB-ES/RoBERTalex",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T19:51:42Z |
---
library_name: transformers
license: apache-2.0
base_model: PlanTL-GOB-ES/RoBERTalex
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: RoBertaLex_v10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBertaLex_v10
This model is a fine-tuned version of [PlanTL-GOB-ES/RoBERTalex](https://huggingface.co/PlanTL-GOB-ES/RoBERTalex) on the None dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.8979
- F1: 0.8975
- Precision: 0.8983
- Recall: 0.8982
- Loss: 0.4829
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 12
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
shoowadoo/bert-finetuned-ner
|
shoowadoo
| 2024-11-05T19:52:05Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-05T19:19:07Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9352327314891502
- name: Recall
type: recall
value: 0.9501851228542578
- name: F1
type: f1
value: 0.942649636864513
- name: Accuracy
type: accuracy
value: 0.985783246011656
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0737
- Precision: 0.9352
- Recall: 0.9502
- F1: 0.9426
- Accuracy: 0.9858
## Model description
More information needed
## Intended uses & limitations
More information needed
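A minimal usage sketch (assuming the standard token-classification pipeline; usage is not documented in this card):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans
ner = pipeline("token-classification",
               model="shoowadoo/bert-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```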
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0237 | 1.0 | 1756 | 0.0718 | 0.9189 | 0.9433 | 0.9309 | 0.9838 |
| 0.0205 | 2.0 | 3512 | 0.0802 | 0.9342 | 0.9458 | 0.9400 | 0.9849 |
| 0.0098 | 3.0 | 5268 | 0.0737 | 0.9352 | 0.9502 | 0.9426 | 0.9858 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
rizki-syazali/tapasid_finetuned_splitted_itqa
|
rizki-syazali
| 2024-11-05T19:50:01Z | 66 | 0 |
transformers
|
[
"transformers",
"safetensors",
"tapas",
"table-question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2024-11-05T19:39:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dirckvdende/bert-finetuned-ner
|
dirckvdende
| 2024-11-05T19:38:43Z | 122 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-05T19:27:13Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3029
- Precision: 0.5757
- Recall: 0.7248
- F1: 0.6417
- Accuracy: 0.9577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 249 | 0.1572 | 0.5549 | 0.6675 | 0.6060 | 0.9591 |
| No log | 2.0 | 498 | 0.1741 | 0.6122 | 0.7235 | 0.6632 | 0.9611 |
| 0.1451 | 3.0 | 747 | 0.2083 | 0.5854 | 0.7173 | 0.6447 | 0.9588 |
| 0.1451 | 4.0 | 996 | 0.2085 | 0.5952 | 0.7049 | 0.6454 | 0.9606 |
| 0.0314 | 5.0 | 1245 | 0.2464 | 0.5998 | 0.7223 | 0.6554 | 0.9594 |
| 0.0314 | 6.0 | 1494 | 0.2773 | 0.5813 | 0.7123 | 0.6402 | 0.9578 |
| 0.0101 | 7.0 | 1743 | 0.2789 | 0.5782 | 0.7273 | 0.6442 | 0.9575 |
| 0.0101 | 8.0 | 1992 | 0.2984 | 0.5749 | 0.7310 | 0.6436 | 0.9576 |
| 0.0039 | 9.0 | 2241 | 0.2946 | 0.5801 | 0.7260 | 0.6449 | 0.9582 |
| 0.0039 | 10.0 | 2490 | 0.3029 | 0.5757 | 0.7248 | 0.6417 | 0.9577 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mradermacher/ziya-llama-13b-medical-merged-GGUF
|
mradermacher
| 2024-11-05T19:33:36Z | 15 | 0 |
transformers
|
[
"transformers",
"gguf",
"chatglm",
"pytorch",
"Text-Generation",
"medical",
"zh",
"en",
"base_model:shibing624/ziya-llama-13b-medical-merged",
"base_model:quantized:shibing624/ziya-llama-13b-medical-merged",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-04T17:40:51Z |
---
base_model: shibing624/ziya-llama-13b-medical-merged
language:
- zh
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- chatglm
- pytorch
- Text-Generation
- medical
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/shibing624/ziya-llama-13b-medical-merged
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ziya-llama-13b-medical-merged-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ziya-llama-13b-medical-merged-GGUF/resolve/main/ziya-llama-13b-medical-merged.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/ziya-llama-13b-medical-merged-GGUF/resolve/main/ziya-llama-13b-medical-merged.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ziya-llama-13b-medical-merged-GGUF/resolve/main/ziya-llama-13b-medical-merged.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ziya-llama-13b-medical-merged-GGUF/resolve/main/ziya-llama-13b-medical-merged.Q3_K_L.gguf) | Q3_K_L | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/ziya-llama-13b-medical-merged-GGUF/resolve/main/ziya-llama-13b-medical-merged.IQ4_XS.gguf) | IQ4_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/ziya-llama-13b-medical-merged-GGUF/resolve/main/ziya-llama-13b-medical-merged.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/ziya-llama-13b-medical-merged-GGUF/resolve/main/ziya-llama-13b-medical-merged.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ziya-llama-13b-medical-merged-GGUF/resolve/main/ziya-llama-13b-medical-merged.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ziya-llama-13b-medical-merged-GGUF/resolve/main/ziya-llama-13b-medical-merged.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/ziya-llama-13b-medical-merged-GGUF/resolve/main/ziya-llama-13b-medical-merged.Q5_K_M.gguf) | Q5_K_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/ziya-llama-13b-medical-merged-GGUF/resolve/main/ziya-llama-13b-medical-merged.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ziya-llama-13b-medical-merged-GGUF/resolve/main/ziya-llama-13b-medical-merged.Q8_0.gguf) | Q8_0 | 14.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
duyntnet/X-MythoChronos-13B-imatrix-GGUF
|
duyntnet
| 2024-11-05T19:24:30Z | 42 | 0 |
transformers
|
[
"transformers",
"gguf",
"imatrix",
"X-MythoChronos-13B",
"text-generation",
"en",
"license:other",
"region:us"
] |
text-generation
| 2024-11-05T15:05:39Z |
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- X-MythoChronos-13B
---
Quantizations of https://huggingface.co/Undi95/X-MythoChronos-13B
### Inference Clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [ollama](https://github.com/ollama/ollama)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [GPT4All](https://github.com/nomic-ai/gpt4all)
* [jan](https://github.com/janhq/jan)
---
# From original readme
This repo contains fp16 files of X-MythoChronos-13B, a merge based around [Xwin-LM/Xwin-LM-13B-V0.2](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2) and [elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2).
The merge was done by carefully choosing the models, the LoRAs, the weight of each of them, the order in which they are applied, and the order of the final model merging, with the main goal of providing a fresh RP experience.
<!-- description end -->
<!-- description start -->
## Models and loras used
- [Xwin-LM/Xwin-LM-13B-V0.2](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2)
- [elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2)
- [Doctor-Shotgun/cat-v1.0-13b](https://huggingface.co/Doctor-Shotgun/cat-v1.0-13b)
- [athirdpath/Eileithyia-13B](https://huggingface.co/athirdpath/Eileithyia-13B)
- [Gryphe/MythoMax-L2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b)
- [crestf411/crestfall-peft](https://huggingface.co/crestf411/crestfall-peft)
- [Undi95/Llama2-13B-no_robots-alpaca-lora](https://huggingface.co/Undi95/Llama2-13B-no_robots-alpaca-lora)
- [zattio770/120-Days-of-LORA-v2-13B](https://huggingface.co/zattio770/120-Days-of-LORA-v2-13B)
- [lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT](https://huggingface.co/lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT)
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
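For illustration, a trivial helper that fills this template (the constant and function names are ours, not part of the original readme):

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

def build_prompt(prompt: str) -> str:
    # Fills the Alpaca template shown above
    return ALPACA_TEMPLATE.format(prompt=prompt)

print(build_prompt("Describe a quiet tavern scene."))
```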
|
GeneralRincewind/AskDocLlama
|
GeneralRincewind
| 2024-11-05T19:24:25Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-05T19:20:24Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** GeneralRincewind
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Lichang-Chen/llama3-8b-point60-100
|
Lichang-Chen
| 2024-11-05T19:23:09Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T19:17:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
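A minimal sketch (hypothetical usage; the card does not document the intended prompting):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Lichang-Chen/llama3-8b-point60-100")
print(generator("The capital of France is", max_new_tokens=20)[0]["generated_text"])
```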
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Marcoroni-neural-chat-7B-v2-GGUF
|
mradermacher
| 2024-11-05T19:13:51Z | 17 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"en",
"base_model:Toten5/Marcoroni-neural-chat-7B-v2",
"base_model:quantized:Toten5/Marcoroni-neural-chat-7B-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-04T11:07:40Z |
---
base_model: Toten5/Marcoroni-neural-chat-7B-v2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Toten5/Marcoroni-neural-chat-7B-v2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Marcoroni-neural-chat-7B-v2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-neural-chat-7B-v2-GGUF/resolve/main/Marcoroni-neural-chat-7B-v2.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-neural-chat-7B-v2-GGUF/resolve/main/Marcoroni-neural-chat-7B-v2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-neural-chat-7B-v2-GGUF/resolve/main/Marcoroni-neural-chat-7B-v2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-neural-chat-7B-v2-GGUF/resolve/main/Marcoroni-neural-chat-7B-v2.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-neural-chat-7B-v2-GGUF/resolve/main/Marcoroni-neural-chat-7B-v2.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-neural-chat-7B-v2-GGUF/resolve/main/Marcoroni-neural-chat-7B-v2.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-neural-chat-7B-v2-GGUF/resolve/main/Marcoroni-neural-chat-7B-v2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-neural-chat-7B-v2-GGUF/resolve/main/Marcoroni-neural-chat-7B-v2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-neural-chat-7B-v2-GGUF/resolve/main/Marcoroni-neural-chat-7B-v2.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-neural-chat-7B-v2-GGUF/resolve/main/Marcoroni-neural-chat-7B-v2.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-neural-chat-7B-v2-GGUF/resolve/main/Marcoroni-neural-chat-7B-v2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-neural-chat-7B-v2-GGUF/resolve/main/Marcoroni-neural-chat-7B-v2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-neural-chat-7B-v2-GGUF/resolve/main/Marcoroni-neural-chat-7B-v2.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
camidenecken/RoBERTa-RM1-v2-2-rm-v24
|
camidenecken
| 2024-11-05T19:09:26Z | 182 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T19:08:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
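Since this section is left unfilled, here is a minimal hedged sketch assuming the standard 🤗 transformers pipeline for the tagged text-classification task (the input string and printed output are illustrative):

```python
# Hedged sketch: assumes the usual transformers text-classification pipeline;
# the card does not document the model's labels or preprocessing.
from transformers import pipeline

clf = pipeline("text-classification", model="camidenecken/RoBERTa-RM1-v2-2-rm-v24")
print(clf("An example sentence to score."))  # -> [{'label': ..., 'score': ...}]
```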
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
camidenecken/RoBERTa-RM1-v2-2-rm-v21
|
camidenecken
| 2024-11-05T19:02:17Z | 164 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T19:01:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
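Pending documented usage, a minimal hedged sketch assuming the standard 🤗 transformers pipeline for the tagged text-classification task (input text is illustrative):

```python
# Hedged sketch: the card does not document usage, so this assumes the
# standard transformers text-classification pipeline.
from transformers import pipeline

clf = pipeline("text-classification", model="camidenecken/RoBERTa-RM1-v2-2-rm-v21")
print(clf("An example sentence to score."))
```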
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
camidenecken/RoBERTa-RM1-v2-2-rm-v18
|
camidenecken
| 2024-11-05T18:55:28Z | 181 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T18:55:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
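Until the author fills this in, a minimal hedged sketch assuming the standard 🤗 transformers pipeline for the tagged text-classification task (input text is illustrative):

```python
# Hedged sketch: assumes the standard transformers text-classification
# pipeline; intended labels and preprocessing are undocumented.
from transformers import pipeline

clf = pipeline("text-classification", model="camidenecken/RoBERTa-RM1-v2-2-rm-v18")
print(clf("An example sentence to score."))
```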
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF
|
mradermacher
| 2024-11-05T18:51:08Z | 33 | 0 |
transformers
|
[
"transformers",
"gguf",
"ko",
"dataset:kyujinpy/KOR-OpenOrca-Platypus-v3",
"dataset:beomi/KoAlpaca-v1.1a",
"dataset:maywell/ko_wikidata_QA",
"base_model:wkshin89/Yi-Ko-6B-Instruct-v1.1",
"base_model:quantized:wkshin89/Yi-Ko-6B-Instruct-v1.1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-05T15:22:34Z |
---
base_model: wkshin89/Yi-Ko-6B-Instruct-v1.1
datasets:
- kyujinpy/KOR-OpenOrca-Platypus-v3
- beomi/KoAlpaca-v1.1a
- maywell/ko_wikidata_QA
language:
- ko
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/wkshin89/Yi-Ko-6B-Instruct-v1.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
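As a concrete starting point, here is a minimal hedged sketch of one common workflow (an assumption, not part of the upstream instructions): download a quant from this repo with `huggingface_hub` and run it with `llama-cpp-python`:

```python
# Hedged sketch: assumes llama-cpp-python and huggingface_hub are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M imatrix quant listed in the table below.
path = hf_hub_download(
    repo_id="mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF",
    filename="Yi-Ko-6B-Instruct-v1.1.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: What is the capital of Korea?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```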
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-IQ2_M.gguf) | i1-IQ2_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-Q2_K.gguf) | i1-Q2_K | 2.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-IQ3_S.gguf) | i1-IQ3_S | 2.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-Q4_0.gguf) | i1-Q4_0 | 3.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 3.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 3.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.i1-Q6_K.gguf) | i1-Q6_K | 5.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Yi-Ko-6B-Instruct-v1.1-GGUF
|
mradermacher
| 2024-11-05T18:51:08Z | 7 | 0 |
transformers
|
[
"transformers",
"gguf",
"ko",
"dataset:kyujinpy/KOR-OpenOrca-Platypus-v3",
"dataset:beomi/KoAlpaca-v1.1a",
"dataset:maywell/ko_wikidata_QA",
"base_model:wkshin89/Yi-Ko-6B-Instruct-v1.1",
"base_model:quantized:wkshin89/Yi-Ko-6B-Instruct-v1.1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-04T14:27:27Z |
---
base_model: wkshin89/Yi-Ko-6B-Instruct-v1.1
datasets:
- kyujinpy/KOR-OpenOrca-Platypus-v3
- beomi/KoAlpaca-v1.1a
- maywell/ko_wikidata_QA
language:
- ko
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/wkshin89/Yi-Ko-6B-Instruct-v1.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
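For example, a recent `llama-cpp-python` can pull a quant straight from the Hub; a hedged sketch (assuming a version that provides `Llama.from_pretrained`):

```python
# Hedged sketch: assumes a recent llama-cpp-python with Hub download support.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Yi-Ko-6B-Instruct-v1.1-GGUF",
    filename="Yi-Ko-6B-Instruct-v1.1.Q4_K_M.gguf",  # see the table below
)
print(llm("Hello!", max_tokens=32)["choices"][0]["text"])
```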
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.Q2_K.gguf) | Q2_K | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.Q3_K_S.gguf) | Q3_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.Q3_K_M.gguf) | Q3_K_M | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.Q3_K_L.gguf) | Q3_K_L | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.IQ4_XS.gguf) | IQ4_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.Q4_K_S.gguf) | Q4_K_S | 3.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.Q4_K_M.gguf) | Q4_K_M | 3.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.Q5_K_S.gguf) | Q5_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.Q5_K_M.gguf) | Q5_K_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.Q6_K.gguf) | Q6_K | 5.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-Ko-6B-Instruct-v1.1-GGUF/resolve/main/Yi-Ko-6B-Instruct-v1.1.Q8_0.gguf) | Q8_0 | 6.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sulaimank/w2v-bert-grain-lg_CV
|
sulaimank
| 2024-11-05T18:49:05Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-11-02T20:24:24Z |
---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: w2v-bert-grain-lg_cv_only_v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_17_0
type: common_voice_17_0
config: lg
split: test[:10%]
args: lg
metrics:
- name: Wer
type: wer
value: 0.2319647170009451
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-grain-lg_cv_only_v2
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6384
- Wer: 0.2320
- Cer: 0.0721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 80
- mixed_precision_training: Native AMP
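For readers reproducing this setup, a hedged sketch of how the hyperparameters above might map onto `transformers` `TrainingArguments` (this is not the author's actual training script; the output directory is illustrative):

```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="w2v-bert-grain-lg_cv_only_v2",  # illustrative
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",          # AdamW, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=80,
    fp16=True,                    # "Native AMP" mixed-precision training
)
```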
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|
| 0.3102 | 1.0 | 8884 | 0.4540 | 0.3644 | 0.1028 |
| 0.2032 | 2.0 | 17768 | 0.3881 | 0.3005 | 0.0845 |
| 0.1687 | 3.0 | 26652 | 0.4061 | 0.3139 | 0.0883 |
| 0.148 | 4.0 | 35536 | 0.4048 | 0.2879 | 0.0825 |
| 0.1327 | 5.0 | 44420 | 0.4136 | 0.2860 | 0.0831 |
| 0.1191 | 6.0 | 53304 | 0.3685 | 0.2889 | 0.0843 |
| 0.1087 | 7.0 | 62188 | 0.4108 | 0.2630 | 0.0810 |
| 0.0996 | 8.0 | 71072 | 0.3682 | 0.2628 | 0.0789 |
| 0.0918 | 9.0 | 79956 | 0.4126 | 0.2672 | 0.0779 |
| 0.0854 | 10.0 | 88840 | 0.3482 | 0.2628 | 0.0783 |
| 0.0778 | 11.0 | 97724 | 0.3948 | 0.2540 | 0.0773 |
| 0.0719 | 12.0 | 106608 | 0.3530 | 0.2477 | 0.0740 |
| 0.066 | 13.0 | 115492 | 0.4267 | 0.2604 | 0.0785 |
| 0.0595 | 14.0 | 124376 | 0.3779 | 0.2467 | 0.0727 |
| 0.0541 | 15.0 | 133260 | 0.4424 | 0.2622 | 0.0813 |
| 0.0485 | 16.0 | 142144 | 0.3848 | 0.2500 | 0.0755 |
| 0.044 | 17.0 | 151028 | 0.3752 | 0.2465 | 0.0736 |
| 0.0391 | 18.0 | 159912 | 0.3722 | 0.2524 | 0.0753 |
| 0.0347 | 19.0 | 168796 | 0.4386 | 0.2481 | 0.0762 |
| 0.0309 | 20.0 | 177680 | 0.4647 | 0.2552 | 0.0788 |
| 0.0273 | 21.0 | 186564 | 0.4453 | 0.2468 | 0.0736 |
| 0.0252 | 22.0 | 195448 | 0.4612 | 0.2450 | 0.0750 |
| 0.0229 | 23.0 | 204332 | 0.4624 | 0.2510 | 0.0750 |
| 0.0209 | 24.0 | 213216 | 0.4640 | 0.2535 | 0.0739 |
| 0.0186 | 25.0 | 222100 | 0.4309 | 0.2542 | 0.0747 |
| 0.0173 | 26.0 | 230984 | 0.4339 | 0.2490 | 0.0734 |
| 0.016 | 27.0 | 239868 | 0.4463 | 0.2477 | 0.0740 |
| 0.0143 | 28.0 | 248752 | 0.5788 | 0.2432 | 0.0784 |
| 0.0135 | 29.0 | 257636 | 0.4962 | 0.2482 | 0.0745 |
| 0.0124 | 30.0 | 266520 | 0.5620 | 0.2448 | 0.0794 |
| 0.0116 | 31.0 | 275404 | 0.5030 | 0.2419 | 0.0749 |
| 0.0108 | 32.0 | 284288 | 0.4731 | 0.2374 | 0.0729 |
| 0.0099 | 33.0 | 293172 | 0.4890 | 0.2425 | 0.0736 |
| 0.0095 | 34.0 | 302056 | 0.5449 | 0.2449 | 0.0783 |
| 0.0086 | 35.0 | 310940 | 0.5007 | 0.2355 | 0.0726 |
| 0.0082 | 36.0 | 319824 | 0.4715 | 0.2372 | 0.0738 |
| 0.0079 | 37.0 | 328708 | 0.5407 | 0.2430 | 0.0731 |
| 0.0072 | 38.0 | 337592 | 0.5361 | 0.2374 | 0.0738 |
| 0.0068 | 39.0 | 346476 | 0.5152 | 0.2459 | 0.0755 |
| 0.0063 | 40.0 | 355360 | 0.4737 | 0.2316 | 0.0715 |
| 0.0058 | 41.0 | 364244 | 0.5980 | 0.2391 | 0.0779 |
| 0.0052 | 42.0 | 373128 | 0.5633 | 0.2360 | 0.0727 |
| 0.0051 | 43.0 | 382012 | 0.5640 | 0.2352 | 0.0732 |
| 0.0046 | 44.0 | 390896 | 0.5674 | 0.2270 | 0.0710 |
| 0.0044 | 45.0 | 399780 | 0.5487 | 0.2352 | 0.0717 |
| 0.0042 | 46.0 | 408664 | 0.6279 | 0.2436 | 0.0786 |
| 0.0039 | 47.0 | 417548 | 0.6260 | 0.2438 | 0.0770 |
| 0.0038 | 48.0 | 426432 | 0.5995 | 0.2328 | 0.0763 |
| 0.0036 | 49.0 | 435316 | 0.6540 | 0.2403 | 0.0776 |
| 0.0031 | 50.0 | 444200 | 0.5347 | 0.2370 | 0.0747 |
| 0.0028 | 51.0 | 453084 | 0.6086 | 0.2490 | 0.0739 |
| 0.0026 | 52.0 | 461968 | 0.5515 | 0.2287 | 0.0693 |
| 0.0025 | 53.0 | 470852 | 0.6788 | 0.2414 | 0.0793 |
| 0.0023 | 54.0 | 479736 | 0.6384 | 0.2320 | 0.0721 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.1.0+cu118
- Datasets 3.1.0
- Tokenizers 0.20.1
|
camidenecken/RoBERTa-RM1-v2-2-rm-v16
|
camidenecken
| 2024-11-05T18:47:29Z | 181 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T18:47:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
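In lieu of documented usage, a minimal hedged sketch assuming the standard 🤗 transformers pipeline for the tagged text-classification task (input text is illustrative):

```python
# Hedged sketch: standard transformers text-classification pipeline;
# the card does not document labels or preprocessing.
from transformers import pipeline

clf = pipeline("text-classification", model="camidenecken/RoBERTa-RM1-v2-2-rm-v16")
print(clf("An example sentence to score."))
```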
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dmelgar11/tuned
|
dmelgar11
| 2024-11-05T18:46:16Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T18:45:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
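Since no usage code is provided, a minimal hedged sketch assuming the standard 🤗 transformers pipeline for the tagged text-classification task (input text is illustrative):

```python
# Hedged sketch: assumes the usual transformers text-classification pipeline
# for this fine-tuned BERT model; labels are undocumented.
from transformers import pipeline

clf = pipeline("text-classification", model="dmelgar11/tuned")
print(clf("An example sentence to classify."))
```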
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
camidenecken/RoBERTa-RM1-v2-2-rm-v15
|
camidenecken
| 2024-11-05T18:45:17Z | 181 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T18:44:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
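Absent documented usage, a minimal hedged sketch assuming the standard 🤗 transformers pipeline for the tagged text-classification task (input text is illustrative):

```python
# Hedged sketch: standard transformers text-classification pipeline;
# intended labels and preprocessing are undocumented.
from transformers import pipeline

clf = pipeline("text-classification", model="camidenecken/RoBERTa-RM1-v2-2-rm-v15")
print(clf("An example sentence to score."))
```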
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TechxGenus/Typst-Coder-1.5B
|
TechxGenus
| 2024-11-05T18:44:10Z | 122 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"conversational",
"base_model:01-ai/Yi-Coder-1.5B",
"base_model:finetune:01-ai/Yi-Coder-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T12:08:29Z |
---
tags:
- code
base_model:
- 01-ai/Yi-Coder-1.5B
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
# Typst-Coder
<p align="center">
<a href="https://huggingface.co/TechxGenus/Typst-Coder-1.5B">[🤖Models]</a> |
<a href="https://github.com/TechxGenus/Typst-Coder">[🛠️Code]</a> |
<a href="https://huggingface.co/datasets/TechxGenus/Typst-Train">[📊Data]</a> |
</p>
<hr>
- [Typst-Coder](#typst-coder)
  - [Introduction](#introduction)
  - [Usage](#usage)
<hr>
## Introduction
While working with Typst documents, we noticed that AI programming assistants often generate poor results. We understand that these assistants may perform better in languages like Python and JavaScript, which benefit from more extensive datasets and from feedback signals from executable code, unlike HTML or Markdown. Even so, current LLMs, including models like GPT-4o and Claude-3.5-Sonnet, frequently struggle to produce accurate Typst syntax.
Upon further investigation, we found that training data for Typst is scarce because it is a relatively new language: GitHub's search tool doesn't yet categorize it as a programming language, and The Stack v1/v2 don't include Typst. No open code LLMs currently list it as a supported language, either. To address this, we developed this project to collect relevant data and train models that improve Typst support in AI programming tools.
## Usage
An example script is shown below:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model in bfloat16; device_map="auto" places the
# weights on available GPUs (or CPU) automatically.
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/Typst-Coder-1.5B")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/Typst-Coder-1.5B",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Format the conversation with the model's chat template.
messages = [
    {"role": "user", "content": "Hi!"},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Tokenize, generate up to 512 new tokens, and decode the output.
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
```
|
TechxGenus/Typst-Coder-9B
|
TechxGenus
| 2024-11-05T18:43:38Z | 7 | 6 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"conversational",
"base_model:01-ai/Yi-Coder-9B",
"base_model:finetune:01-ai/Yi-Coder-9B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-03T14:48:21Z |
---
tags:
- code
base_model:
- 01-ai/Yi-Coder-9B
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
# Typst-Coder
<p align="center">
<a href="https://huggingface.co/TechxGenus/Typst-Coder-1.5B">[🤖Models]</a> |
<a href="https://github.com/TechxGenus/Typst-Coder">[🛠️Code]</a> |
<a href="https://huggingface.co/datasets/TechxGenus/Typst-Train">[📊Data]</a> |
</p>
<hr>
- [Typst-Coder](#typst-coder)
  - [Introduction](#introduction)
  - [Usage](#usage)
<hr>
## Introduction
While working with Typst documents, we noticed that AI programming assistants often generate poor results. We understand that these assistants may perform better in languages like Python and JavaScript, which benefit from more extensive datasets and from feedback signals from executable code, unlike HTML or Markdown. Even so, current LLMs, including models like GPT-4o and Claude-3.5-Sonnet, frequently struggle to produce accurate Typst syntax.
Upon further investigation, we found that training data for Typst is scarce because it is a relatively new language: GitHub's search tool doesn't yet categorize it as a programming language, and The Stack v1/v2 don't include Typst. No open code LLMs currently list it as a supported language, either. To address this, we developed this project to collect relevant data and train models that improve Typst support in AI programming tools.
## Usage
An example script is shown below:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model in bfloat16; device_map="auto" places the
# weights on available GPUs (or CPU) automatically.
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/Typst-Coder-9B")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/Typst-Coder-9B",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Format the conversation with the model's chat template.
messages = [
    {"role": "user", "content": "Hi!"},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Tokenize, generate up to 512 new tokens, and decode the output.
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
```
|
june9876/conflict_conversation_generator
|
june9876
| 2024-11-05T18:43:09Z | 130 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T16:53:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
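Since no usage code is provided, a minimal hedged sketch assuming the standard 🤗 transformers pipeline for the tagged text-generation task (the prompt and decoding settings are illustrative):

```python
# Hedged sketch: assumes the usual transformers text-generation pipeline;
# the card does not document prompt format or decoding settings.
from transformers import pipeline

generator = pipeline("text-generation", model="june9876/conflict_conversation_generator")
print(generator("Two coworkers disagree about", max_new_tokens=50)[0]["generated_text"])
```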
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MayBashendy/ASAP_FineTuningBERT_Aug_k1_task1_organization_fold3
|
MayBashendy
| 2024-11-05T18:42:21Z | 164 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T18:11:03Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_Aug_k1_task1_organization_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_Aug_k1_task1_organization_fold3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5390
- Qwk: 0.6614
- Mse: 0.5390
- Rmse: 0.7342
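For reference, these metrics can be computed from gold and predicted scores with scikit-learn; a hedged sketch with illustrative arrays:

```python
# Hedged sketch: computes Qwk (quadratic weighted kappa), Mse, and Rmse
# from gold and predicted scores, matching the metrics reported above.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([2, 3, 3, 4])   # illustrative gold scores
y_pred = np.array([2, 3, 4, 4])   # illustrative model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
print(f"Qwk={qwk:.4f} Mse={mse:.4f} Rmse={rmse:.4f}")
```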
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:-------:|:------:|
| No log | 0.0417 | 2 | 10.9663 | 0.0 | 10.9663 | 3.3115 |
| No log | 0.0833 | 4 | 9.4263 | 0.0 | 9.4263 | 3.0702 |
| No log | 0.125 | 6 | 7.8318 | 0.0107 | 7.8318 | 2.7985 |
| No log | 0.1667 | 8 | 6.2699 | 0.0023 | 6.2699 | 2.5040 |
| No log | 0.2083 | 10 | 4.7637 | 0.0 | 4.7637 | 2.1826 |
| No log | 0.25 | 12 | 3.4917 | 0.0297 | 3.4917 | 1.8686 |
| No log | 0.2917 | 14 | 2.4523 | 0.0147 | 2.4523 | 1.5660 |
| No log | 0.3333 | 16 | 1.7750 | 0.0086 | 1.7750 | 1.3323 |
| No log | 0.375 | 18 | 1.3127 | 0.0943 | 1.3127 | 1.1457 |
| No log | 0.4167 | 20 | 0.9924 | 0.0524 | 0.9924 | 0.9962 |
| No log | 0.4583 | 22 | 0.8882 | 0.0266 | 0.8882 | 0.9424 |
| No log | 0.5 | 24 | 0.8485 | 0.0102 | 0.8485 | 0.9211 |
| No log | 0.5417 | 26 | 0.8307 | 0.0671 | 0.8307 | 0.9114 |
| No log | 0.5833 | 28 | 0.8599 | 0.3386 | 0.8599 | 0.9273 |
| No log | 0.625 | 30 | 0.8520 | 0.1542 | 0.8520 | 0.9230 |
| No log | 0.6667 | 32 | 0.8203 | 0.0284 | 0.8203 | 0.9057 |
| No log | 0.7083 | 34 | 0.7727 | 0.0384 | 0.7727 | 0.8790 |
| No log | 0.75 | 36 | 0.6904 | 0.1335 | 0.6904 | 0.8309 |
| No log | 0.7917 | 38 | 0.7254 | 0.4371 | 0.7254 | 0.8517 |
| No log | 0.8333 | 40 | 0.6569 | 0.2662 | 0.6569 | 0.8105 |
| No log | 0.875 | 42 | 0.7216 | 0.1294 | 0.7216 | 0.8495 |
| No log | 0.9167 | 44 | 0.6092 | 0.2773 | 0.6092 | 0.7805 |
| No log | 0.9583 | 46 | 0.6772 | 0.4222 | 0.6772 | 0.8229 |
| No log | 1.0 | 48 | 0.6767 | 0.4602 | 0.6767 | 0.8226 |
| No log | 1.0417 | 50 | 0.5863 | 0.4008 | 0.5863 | 0.7657 |
| No log | 1.0833 | 52 | 0.6300 | 0.2867 | 0.6300 | 0.7937 |
| No log | 1.125 | 54 | 0.5294 | 0.4568 | 0.5294 | 0.7276 |
| No log | 1.1667 | 56 | 0.6231 | 0.5171 | 0.6231 | 0.7894 |
| No log | 1.2083 | 58 | 0.6013 | 0.5343 | 0.6013 | 0.7754 |
| No log | 1.25 | 60 | 0.4991 | 0.5050 | 0.4991 | 0.7064 |
| No log | 1.2917 | 62 | 0.4922 | 0.4875 | 0.4922 | 0.7016 |
| No log | 1.3333 | 64 | 0.5272 | 0.5453 | 0.5272 | 0.7261 |
| No log | 1.375 | 66 | 0.6860 | 0.5477 | 0.6860 | 0.8283 |
| No log | 1.4167 | 68 | 0.5690 | 0.5617 | 0.5690 | 0.7544 |
| No log | 1.4583 | 70 | 0.4874 | 0.5123 | 0.4874 | 0.6981 |
| No log | 1.5 | 72 | 0.5300 | 0.5974 | 0.5300 | 0.7280 |
| No log | 1.5417 | 74 | 0.5168 | 0.5891 | 0.5168 | 0.7189 |
| No log | 1.5833 | 76 | 0.4533 | 0.5359 | 0.4533 | 0.6733 |
| No log | 1.625 | 78 | 0.4715 | 0.5613 | 0.4715 | 0.6867 |
| No log | 1.6667 | 80 | 0.7300 | 0.5520 | 0.7300 | 0.8544 |
| No log | 1.7083 | 82 | 0.8602 | 0.4575 | 0.8602 | 0.9275 |
| No log | 1.75 | 84 | 0.6119 | 0.5794 | 0.6119 | 0.7822 |
| No log | 1.7917 | 86 | 0.4616 | 0.5208 | 0.4616 | 0.6794 |
| No log | 1.8333 | 88 | 0.4639 | 0.5281 | 0.4639 | 0.6811 |
| No log | 1.875 | 90 | 0.5099 | 0.5767 | 0.5099 | 0.7141 |
| No log | 1.9167 | 92 | 0.5564 | 0.5809 | 0.5564 | 0.7459 |
| No log | 1.9583 | 94 | 0.4995 | 0.5813 | 0.4995 | 0.7068 |
| No log | 2.0 | 96 | 0.4406 | 0.5330 | 0.4406 | 0.6638 |
| No log | 2.0417 | 98 | 0.4350 | 0.5189 | 0.4350 | 0.6596 |
| No log | 2.0833 | 100 | 0.4884 | 0.6066 | 0.4884 | 0.6989 |
| No log | 2.125 | 102 | 0.6504 | 0.6157 | 0.6504 | 0.8065 |
| No log | 2.1667 | 104 | 0.5399 | 0.6087 | 0.5399 | 0.7348 |
| No log | 2.2083 | 106 | 0.4432 | 0.5703 | 0.4432 | 0.6657 |
| No log | 2.25 | 108 | 0.4776 | 0.5874 | 0.4776 | 0.6911 |
| No log | 2.2917 | 110 | 0.6471 | 0.6155 | 0.6471 | 0.8044 |
| No log | 2.3333 | 112 | 0.5459 | 0.6019 | 0.5459 | 0.7388 |
| No log | 2.375 | 114 | 0.4922 | 0.6038 | 0.4922 | 0.7015 |
| No log | 2.4167 | 116 | 0.4859 | 0.6087 | 0.4859 | 0.6970 |
| No log | 2.4583 | 118 | 0.5212 | 0.6050 | 0.5212 | 0.7219 |
| No log | 2.5 | 120 | 0.5050 | 0.6037 | 0.5050 | 0.7107 |
| No log | 2.5417 | 122 | 0.4367 | 0.5864 | 0.4367 | 0.6608 |
| No log | 2.5833 | 124 | 0.4564 | 0.5931 | 0.4564 | 0.6756 |
| No log | 2.625 | 126 | 0.5612 | 0.6236 | 0.5612 | 0.7491 |
| No log | 2.6667 | 128 | 0.5660 | 0.6258 | 0.5660 | 0.7523 |
| No log | 2.7083 | 130 | 0.5733 | 0.6258 | 0.5733 | 0.7572 |
| No log | 2.75 | 132 | 0.4220 | 0.5608 | 0.4220 | 0.6496 |
| No log | 2.7917 | 134 | 0.4163 | 0.5664 | 0.4163 | 0.6452 |
| No log | 2.8333 | 136 | 0.4847 | 0.6089 | 0.4847 | 0.6962 |
| No log | 2.875 | 138 | 0.5050 | 0.6149 | 0.5050 | 0.7106 |
| No log | 2.9167 | 140 | 0.4212 | 0.5688 | 0.4212 | 0.6490 |
| No log | 2.9583 | 142 | 0.4243 | 0.5630 | 0.4243 | 0.6514 |
| No log | 3.0 | 144 | 0.4711 | 0.6060 | 0.4711 | 0.6864 |
| No log | 3.0417 | 146 | 0.4862 | 0.6030 | 0.4862 | 0.6973 |
| No log | 3.0833 | 148 | 0.4597 | 0.5956 | 0.4597 | 0.6780 |
| No log | 3.125 | 150 | 0.5160 | 0.6132 | 0.5160 | 0.7183 |
| No log | 3.1667 | 152 | 0.4906 | 0.6138 | 0.4906 | 0.7004 |
| No log | 3.2083 | 154 | 0.4656 | 0.5938 | 0.4656 | 0.6823 |
| No log | 3.25 | 156 | 0.5141 | 0.6151 | 0.5141 | 0.7170 |
| No log | 3.2917 | 158 | 0.5455 | 0.6226 | 0.5455 | 0.7386 |
| No log | 3.3333 | 160 | 0.4454 | 0.5889 | 0.4454 | 0.6673 |
| No log | 3.375 | 162 | 0.4323 | 0.5770 | 0.4323 | 0.6575 |
| No log | 3.4167 | 164 | 0.4546 | 0.6201 | 0.4546 | 0.6742 |
| No log | 3.4583 | 166 | 0.4639 | 0.6152 | 0.4639 | 0.6811 |
| No log | 3.5 | 168 | 0.4446 | 0.6008 | 0.4446 | 0.6668 |
| No log | 3.5417 | 170 | 0.4840 | 0.6321 | 0.4840 | 0.6957 |
| No log | 3.5833 | 172 | 0.4297 | 0.5647 | 0.4297 | 0.6555 |
| No log | 3.625 | 174 | 0.4354 | 0.5633 | 0.4354 | 0.6599 |
| No log | 3.6667 | 176 | 0.5103 | 0.5973 | 0.5103 | 0.7144 |
| No log | 3.7083 | 178 | 0.4955 | 0.6006 | 0.4955 | 0.7039 |
| No log | 3.75 | 180 | 0.4306 | 0.5592 | 0.4306 | 0.6562 |
| No log | 3.7917 | 182 | 0.4554 | 0.5867 | 0.4554 | 0.6748 |
| No log | 3.8333 | 184 | 0.6762 | 0.6240 | 0.6762 | 0.8223 |
| No log | 3.875 | 186 | 0.7220 | 0.6492 | 0.7220 | 0.8497 |
| No log | 3.9167 | 188 | 0.5116 | 0.6655 | 0.5116 | 0.7153 |
| No log | 3.9583 | 190 | 0.4229 | 0.5645 | 0.4229 | 0.6503 |
| No log | 4.0 | 192 | 0.4209 | 0.5723 | 0.4209 | 0.6488 |
| No log | 4.0417 | 194 | 0.4975 | 0.6697 | 0.4975 | 0.7053 |
| No log | 4.0833 | 196 | 0.5781 | 0.6848 | 0.5781 | 0.7603 |
| No log | 4.125 | 198 | 0.4780 | 0.6700 | 0.4780 | 0.6914 |
| No log | 4.1667 | 200 | 0.4190 | 0.5819 | 0.4190 | 0.6473 |
| No log | 4.2083 | 202 | 0.4450 | 0.6422 | 0.4450 | 0.6671 |
| No log | 4.25 | 204 | 0.5491 | 0.6498 | 0.5491 | 0.7410 |
| No log | 4.2917 | 206 | 0.5386 | 0.6348 | 0.5386 | 0.7339 |
| No log | 4.3333 | 208 | 0.4654 | 0.6148 | 0.4654 | 0.6822 |
| No log | 4.375 | 210 | 0.4679 | 0.6148 | 0.4679 | 0.6840 |
| No log | 4.4167 | 212 | 0.5931 | 0.6221 | 0.5931 | 0.7701 |
| No log | 4.4583 | 214 | 0.5860 | 0.6314 | 0.5860 | 0.7655 |
| No log | 4.5 | 216 | 0.4750 | 0.6139 | 0.4750 | 0.6892 |
| No log | 4.5417 | 218 | 0.4833 | 0.6165 | 0.4833 | 0.6952 |
| No log | 4.5833 | 220 | 0.5171 | 0.6415 | 0.5171 | 0.7191 |
| No log | 4.625 | 222 | 0.4802 | 0.6328 | 0.4802 | 0.6929 |
| No log | 4.6667 | 224 | 0.4484 | 0.6444 | 0.4484 | 0.6696 |
| No log | 4.7083 | 226 | 0.4427 | 0.6380 | 0.4427 | 0.6654 |
| No log | 4.75 | 228 | 0.5483 | 0.6700 | 0.5483 | 0.7405 |
| No log | 4.7917 | 230 | 0.5746 | 0.6688 | 0.5746 | 0.7580 |
| No log | 4.8333 | 232 | 0.4564 | 0.6306 | 0.4564 | 0.6756 |
| No log | 4.875 | 234 | 0.4311 | 0.6119 | 0.4311 | 0.6566 |
| No log | 4.9167 | 236 | 0.4561 | 0.6395 | 0.4561 | 0.6754 |
| No log | 4.9583 | 238 | 0.4414 | 0.6159 | 0.4414 | 0.6644 |
| No log | 5.0 | 240 | 0.4686 | 0.6470 | 0.4686 | 0.6845 |
| No log | 5.0417 | 242 | 0.5663 | 0.6871 | 0.5663 | 0.7525 |
| No log | 5.0833 | 244 | 0.6179 | 0.6762 | 0.6179 | 0.7861 |
| No log | 5.125 | 246 | 0.5064 | 0.6726 | 0.5064 | 0.7116 |
| No log | 5.1667 | 248 | 0.4723 | 0.6460 | 0.4723 | 0.6872 |
| No log | 5.2083 | 250 | 0.5523 | 0.6694 | 0.5523 | 0.7432 |
| No log | 5.25 | 252 | 0.6151 | 0.6813 | 0.6151 | 0.7843 |
| No log | 5.2917 | 254 | 0.5367 | 0.6497 | 0.5367 | 0.7326 |
| No log | 5.3333 | 256 | 0.5388 | 0.6411 | 0.5388 | 0.7341 |
| No log | 5.375 | 258 | 0.4877 | 0.6165 | 0.4877 | 0.6984 |
| No log | 5.4167 | 260 | 0.4693 | 0.6180 | 0.4693 | 0.6851 |
| No log | 5.4583 | 262 | 0.4477 | 0.5763 | 0.4477 | 0.6691 |
| No log | 5.5 | 264 | 0.4621 | 0.6193 | 0.4621 | 0.6798 |
| No log | 5.5417 | 266 | 0.6466 | 0.6612 | 0.6466 | 0.8041 |
| No log | 5.5833 | 268 | 0.7448 | 0.6600 | 0.7448 | 0.8630 |
| No log | 5.625 | 270 | 0.5929 | 0.6752 | 0.5929 | 0.7700 |
| No log | 5.6667 | 272 | 0.4553 | 0.6205 | 0.4553 | 0.6747 |
| No log | 5.7083 | 274 | 0.4414 | 0.5916 | 0.4414 | 0.6644 |
| No log | 5.75 | 276 | 0.4653 | 0.6376 | 0.4653 | 0.6821 |
| No log | 5.7917 | 278 | 0.5432 | 0.6733 | 0.5432 | 0.7370 |
| No log | 5.8333 | 280 | 0.4844 | 0.6550 | 0.4844 | 0.6960 |
| No log | 5.875 | 282 | 0.4612 | 0.6536 | 0.4612 | 0.6791 |
| No log | 5.9167 | 284 | 0.5200 | 0.6652 | 0.5200 | 0.7211 |
| No log | 5.9583 | 286 | 0.5399 | 0.6756 | 0.5399 | 0.7348 |
| No log | 6.0 | 288 | 0.4445 | 0.6544 | 0.4445 | 0.6667 |
| No log | 6.0417 | 290 | 0.4349 | 0.6428 | 0.4349 | 0.6595 |
| No log | 6.0833 | 292 | 0.4856 | 0.6623 | 0.4856 | 0.6969 |
| No log | 6.125 | 294 | 0.5212 | 0.6643 | 0.5212 | 0.7220 |
| No log | 6.1667 | 296 | 0.5556 | 0.6713 | 0.5556 | 0.7454 |
| No log | 6.2083 | 298 | 0.5579 | 0.6744 | 0.5579 | 0.7469 |
| No log | 6.25 | 300 | 0.5715 | 0.6729 | 0.5715 | 0.7560 |
| No log | 6.2917 | 302 | 0.5541 | 0.6535 | 0.5541 | 0.7444 |
| No log | 6.3333 | 304 | 0.4756 | 0.6248 | 0.4756 | 0.6896 |
| No log | 6.375 | 306 | 0.4754 | 0.6285 | 0.4754 | 0.6895 |
| No log | 6.4167 | 308 | 0.5538 | 0.6415 | 0.5538 | 0.7442 |
| No log | 6.4583 | 310 | 0.6286 | 0.6692 | 0.6286 | 0.7928 |
| No log | 6.5 | 312 | 0.5390 | 0.6652 | 0.5390 | 0.7341 |
| No log | 6.5417 | 314 | 0.4593 | 0.6257 | 0.4593 | 0.6777 |
| No log | 6.5833 | 316 | 0.4511 | 0.6171 | 0.4511 | 0.6716 |
| No log | 6.625 | 318 | 0.5005 | 0.6571 | 0.5005 | 0.7074 |
| No log | 6.6667 | 320 | 0.5767 | 0.6701 | 0.5767 | 0.7594 |
| No log | 6.7083 | 322 | 0.5198 | 0.6638 | 0.5198 | 0.7210 |
| No log | 6.75 | 324 | 0.4540 | 0.6264 | 0.4540 | 0.6738 |
| No log | 6.7917 | 326 | 0.4702 | 0.6484 | 0.4702 | 0.6857 |
| No log | 6.8333 | 328 | 0.5826 | 0.6861 | 0.5826 | 0.7633 |
| No log | 6.875 | 330 | 0.5976 | 0.6852 | 0.5976 | 0.7730 |
| No log | 6.9167 | 332 | 0.4934 | 0.6673 | 0.4934 | 0.7024 |
| No log | 6.9583 | 334 | 0.4595 | 0.6324 | 0.4595 | 0.6778 |
| No log | 7.0 | 336 | 0.4497 | 0.6225 | 0.4497 | 0.6706 |
| No log | 7.0417 | 338 | 0.4966 | 0.6641 | 0.4966 | 0.7047 |
| No log | 7.0833 | 340 | 0.5630 | 0.6798 | 0.5630 | 0.7503 |
| No log | 7.125 | 342 | 0.5192 | 0.6620 | 0.5192 | 0.7205 |
| No log | 7.1667 | 344 | 0.4929 | 0.6492 | 0.4929 | 0.7021 |
| No log | 7.2083 | 346 | 0.4977 | 0.6486 | 0.4977 | 0.7055 |
| No log | 7.25 | 348 | 0.5401 | 0.6591 | 0.5401 | 0.7349 |
| No log | 7.2917 | 350 | 0.5409 | 0.6573 | 0.5409 | 0.7355 |
| No log | 7.3333 | 352 | 0.5271 | 0.6597 | 0.5271 | 0.7260 |
| No log | 7.375 | 354 | 0.5172 | 0.6672 | 0.5172 | 0.7192 |
| No log | 7.4167 | 356 | 0.4892 | 0.6412 | 0.4892 | 0.6994 |
| No log | 7.4583 | 358 | 0.4835 | 0.6341 | 0.4835 | 0.6954 |
| No log | 7.5 | 360 | 0.5130 | 0.6539 | 0.5130 | 0.7162 |
| No log | 7.5417 | 362 | 0.4942 | 0.6515 | 0.4942 | 0.7030 |
| No log | 7.5833 | 364 | 0.5067 | 0.6479 | 0.5067 | 0.7118 |
| No log | 7.625 | 366 | 0.4850 | 0.6355 | 0.4850 | 0.6964 |
| No log | 7.6667 | 368 | 0.4828 | 0.6292 | 0.4828 | 0.6948 |
| No log | 7.7083 | 370 | 0.5140 | 0.6396 | 0.5140 | 0.7170 |
| No log | 7.75 | 372 | 0.4916 | 0.6441 | 0.4916 | 0.7011 |
| No log | 7.7917 | 374 | 0.5074 | 0.6517 | 0.5074 | 0.7123 |
| No log | 7.8333 | 376 | 0.4879 | 0.6469 | 0.4879 | 0.6985 |
| No log | 7.875 | 378 | 0.4975 | 0.6534 | 0.4975 | 0.7053 |
| No log | 7.9167 | 380 | 0.5277 | 0.6740 | 0.5277 | 0.7264 |
| No log | 7.9583 | 382 | 0.5237 | 0.6731 | 0.5237 | 0.7237 |
| No log | 8.0 | 384 | 0.4924 | 0.6387 | 0.4924 | 0.7017 |
| No log | 8.0417 | 386 | 0.5033 | 0.6508 | 0.5033 | 0.7094 |
| No log | 8.0833 | 388 | 0.5438 | 0.6672 | 0.5438 | 0.7374 |
| No log | 8.125 | 390 | 0.5496 | 0.6694 | 0.5496 | 0.7414 |
| No log | 8.1667 | 392 | 0.5193 | 0.6558 | 0.5193 | 0.7206 |
| No log | 8.2083 | 394 | 0.4974 | 0.6466 | 0.4974 | 0.7053 |
| No log | 8.25 | 396 | 0.5203 | 0.6598 | 0.5203 | 0.7213 |
| No log | 8.2917 | 398 | 0.5464 | 0.6725 | 0.5464 | 0.7392 |
| No log | 8.3333 | 400 | 0.6041 | 0.6647 | 0.6041 | 0.7773 |
| No log | 8.375 | 402 | 0.5957 | 0.6613 | 0.5957 | 0.7718 |
| No log | 8.4167 | 404 | 0.5261 | 0.6652 | 0.5261 | 0.7253 |
| No log | 8.4583 | 406 | 0.5031 | 0.6539 | 0.5031 | 0.7093 |
| No log | 8.5 | 408 | 0.5218 | 0.6700 | 0.5218 | 0.7224 |
| No log | 8.5417 | 410 | 0.5461 | 0.6720 | 0.5461 | 0.7390 |
| No log | 8.5833 | 412 | 0.5460 | 0.6680 | 0.5460 | 0.7389 |
| No log | 8.625 | 414 | 0.5369 | 0.6681 | 0.5369 | 0.7328 |
| No log | 8.6667 | 416 | 0.5298 | 0.6682 | 0.5298 | 0.7279 |
| No log | 8.7083 | 418 | 0.5049 | 0.6496 | 0.5049 | 0.7105 |
| No log | 8.75 | 420 | 0.5070 | 0.6496 | 0.5070 | 0.7121 |
| No log | 8.7917 | 422 | 0.5086 | 0.6488 | 0.5086 | 0.7132 |
| No log | 8.8333 | 424 | 0.5253 | 0.6690 | 0.5253 | 0.7248 |
| No log | 8.875 | 426 | 0.5510 | 0.6687 | 0.5510 | 0.7423 |
| No log | 8.9167 | 428 | 0.5398 | 0.6672 | 0.5398 | 0.7347 |
| No log | 8.9583 | 430 | 0.5225 | 0.6566 | 0.5225 | 0.7229 |
| No log | 9.0 | 432 | 0.5242 | 0.6608 | 0.5242 | 0.7240 |
| No log | 9.0417 | 434 | 0.5489 | 0.6668 | 0.5489 | 0.7409 |
| No log | 9.0833 | 436 | 0.5800 | 0.6581 | 0.5800 | 0.7616 |
| No log | 9.125 | 438 | 0.5713 | 0.6610 | 0.5713 | 0.7558 |
| No log | 9.1667 | 440 | 0.5635 | 0.6589 | 0.5635 | 0.7507 |
| No log | 9.2083 | 442 | 0.5509 | 0.6675 | 0.5509 | 0.7422 |
| No log | 9.25 | 444 | 0.5367 | 0.6614 | 0.5367 | 0.7326 |
| No log | 9.2917 | 446 | 0.5247 | 0.6556 | 0.5247 | 0.7243 |
| No log | 9.3333 | 448 | 0.5287 | 0.6602 | 0.5287 | 0.7271 |
| No log | 9.375 | 450 | 0.5260 | 0.6551 | 0.5260 | 0.7253 |
| No log | 9.4167 | 452 | 0.5370 | 0.6640 | 0.5370 | 0.7328 |
| No log | 9.4583 | 454 | 0.5412 | 0.6664 | 0.5412 | 0.7357 |
| No log | 9.5 | 456 | 0.5438 | 0.6636 | 0.5438 | 0.7374 |
| No log | 9.5417 | 458 | 0.5368 | 0.6650 | 0.5368 | 0.7327 |
| No log | 9.5833 | 460 | 0.5373 | 0.6650 | 0.5373 | 0.7330 |
| No log | 9.625 | 462 | 0.5335 | 0.6626 | 0.5335 | 0.7304 |
| No log | 9.6667 | 464 | 0.5301 | 0.6633 | 0.5301 | 0.7281 |
| No log | 9.7083 | 466 | 0.5316 | 0.6626 | 0.5316 | 0.7291 |
| No log | 9.75 | 468 | 0.5353 | 0.6632 | 0.5353 | 0.7316 |
| No log | 9.7917 | 470 | 0.5353 | 0.6657 | 0.5353 | 0.7316 |
| No log | 9.8333 | 472 | 0.5349 | 0.6657 | 0.5349 | 0.7313 |
| No log | 9.875 | 474 | 0.5385 | 0.6614 | 0.5385 | 0.7338 |
| No log | 9.9167 | 476 | 0.5398 | 0.6614 | 0.5398 | 0.7347 |
| No log | 9.9583 | 478 | 0.5393 | 0.6614 | 0.5393 | 0.7343 |
| No log | 10.0 | 480 | 0.5390 | 0.6614 | 0.5390 | 0.7342 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
camidenecken/RoBERTa-RM1-v2-2-rm-v12
|
camidenecken
| 2024-11-05T18:38:23Z | 183 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T18:37:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
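Pending the authors' official snippet, a minimal sketch assuming the standard `transformers` pipeline API; the checkpoint id is taken from this repo, and the label names are whatever the model config defines:

```python
from transformers import pipeline

# Text-classification pipeline for this RoBERTa checkpoint.
clf = pipeline("text-classification", model="camidenecken/RoBERTa-RM1-v2-2-rm-v12")
print(clf("An example sentence to score."))
```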
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rirv938/multihead_example
|
rirv938
| 2024-11-05T18:36:56Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"multihead_llama",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] |
feature-extraction
| 2024-11-05T18:31:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
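No official snippet has been provided yet; the following is a minimal sketch based only on the repo tags (`multihead_llama`, `feature-extraction`, `custom_code`). The checkpoint id comes from this repo, and `trust_remote_code=True` is assumed to be required for the custom architecture:

```python
from transformers import AutoModel, AutoTokenizer

repo = "rirv938/multihead_example"

# The repo ships a custom `multihead_llama` architecture, so remote code must be trusted.
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModel.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("A sample sentence.", return_tensors="pt")
outputs = model(**inputs)  # feature-extraction outputs; exact heads depend on the custom code
```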
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
camidenecken/RoBERTa-RM1-v2-2-rm-v11
|
camidenecken
| 2024-11-05T18:36:07Z | 181 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-05T18:35:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
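Pending the authors' official snippet, a minimal sketch using the lower-level auto classes (an assumption based on the `roberta` and `text-classification` tags; the checkpoint id is taken from this repo):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "camidenecken/RoBERTa-RM1-v2-2-rm-v11"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("An example sentence to score.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities over the config's labels
```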
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|