| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-27 06:27:46) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 499 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-27 06:26:25) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
armaniii/bert-base-uncased-augmentation-indomain-bm25-sts | armaniii | 2024-11-26T20:14:17Z | 10 | 2 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:20127",
"loss:CosineSimilarityLoss",
"en",
"dataset:sentence-transformers/stsb",
"arxiv:1908.10084",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-11-26T20:13:49Z | ---
base_model: google-bert/bert-base-uncased
datasets:
- sentence-transformers/stsb
language:
- en
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:20127
- loss:CosineSimilarityLoss
widget:
- source_sentence: The man talked to a girl over the internet camera.
sentences:
- A group of elderly people pose around a dining table.
- A teenager talks to a girl over a webcam.
- There is no 'still' that is not relative to some other object.
- source_sentence: A woman is writing something.
sentences:
- Two eagles are perched on a branch.
- It refers to the maximum f-stop (which is defined as the ratio of focal length
to effective aperture diameter).
- A woman is chopping green onions.
- source_sentence: The player shoots the winning points.
sentences:
- Minimum wage laws hurt the least skilled, least productive the most.
- The basketball player is about to score points for his team.
- Sheep are grazing in the field in front of a line of trees.
- source_sentence: Stars form in star-formation regions, which itself develop from
molecular clouds.
sentences:
- Although I believe Searle is mistaken, I don't think you have found the problem.
- It may be possible for a solar system like ours to exist outside of a galaxy.
- A blond-haired child performing on the trumpet in front of a house while his younger
brother watches.
- source_sentence: While Queen may refer to both Queen regent (sovereign) or Queen
consort, the King has always been the sovereign.
sentences:
- At first, I thought this is a bit of a tricky question.
- A man sitting on the floor in a room is strumming a guitar.
- There is a very good reason not to refer to the Queen's spouse as "King" - because
they aren't the King.
model-index:
- name: SentenceTransformer based on google-bert/bert-base-uncased
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.8704036241540303
name: Pearson Cosine
- type: spearman_cosine
value: 0.8723063947160014
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8240304398880643
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8326280427400794
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.824332157368767
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8327621115149644
name: Spearman Euclidean
- type: pearson_dot
value: 0.7561120117358238
name: Pearson Dot
- type: spearman_dot
value: 0.7732899193523305
name: Spearman Dot
- type: pearson_max
value: 0.8704036241540303
name: Pearson Max
- type: spearman_max
value: 0.8723063947160014
name: Spearman Max
- type: pearson_cosine
value: 0.8341388917194029
name: Pearson Cosine
- type: spearman_cosine
value: 0.8312253997736475
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8121299512156789
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8102823785744042
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8124379587910084
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8106160221464417
name: Spearman Euclidean
- type: pearson_dot
value: 0.6947485972044003
name: Pearson Dot
- type: spearman_dot
value: 0.6858002756760537
name: Spearman Dot
- type: pearson_max
value: 0.8341388917194029
name: Pearson Max
- type: spearman_max
value: 0.8312253997736475
name: Spearman Max
---
# SentenceTransformer based on google-bert/bert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) <!-- at revision 86b5e0934494bd15c9632b12f734a8a67f723594 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
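The Pooling module above is configured for mean pooling (`pooling_mode_mean_tokens: True`), i.e. the sentence embedding is the attention-mask-weighted average of the token embeddings. A minimal sketch of that operation in plain PyTorch (illustrative only, not the library's internal implementation):
```python
import torch
from transformers import AutoTokenizer, AutoModel

# Sketch of mean pooling as configured above; not the sentence-transformers internals.
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
bert = AutoModel.from_pretrained("google-bert/bert-base-uncased")

encoded = tokenizer(["A plane is taking off."], padding=True, truncation=True,
                    max_length=128, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**encoded).last_hidden_state   # (batch, seq_len, 768)

# Average only over real tokens, ignoring padding positions.
mask = encoded["attention_mask"].unsqueeze(-1).float()      # (batch, seq_len, 1)
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```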
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("armaniii/bert-base-uncased-augmentation-indomain-bm25-sts")
# Run inference
sentences = [
'While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign.',
'There is a very good reason not to refer to the Queen\'s spouse as "King" - because they aren\'t the King.',
'A man sitting on the floor in a room is strumming a guitar.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8704 |
| **spearman_cosine** | **0.8723** |
| pearson_manhattan | 0.824 |
| spearman_manhattan | 0.8326 |
| pearson_euclidean | 0.8243 |
| spearman_euclidean | 0.8328 |
| pearson_dot | 0.7561 |
| spearman_dot | 0.7733 |
| pearson_max | 0.8704 |
| spearman_max | 0.8723 |
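A rough sketch of how these figures can be reproduced with the evaluator linked above; the split and column names follow the sentence-transformers/stsb dataset card, and the exact return value of the evaluator call depends on your library version:
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("armaniii/bert-base-uncased-augmentation-indomain-bm25-sts")

# sentence-transformers/stsb provides sentence1, sentence2 and a 0-1 similarity score.
test = load_dataset("sentence-transformers/stsb", split="test")
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=test["sentence1"],
    sentences2=test["sentence2"],
    scores=test["score"],
    name="sts-test",
)
print(evaluator(model))  # Pearson/Spearman scores per similarity function
```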
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8341 |
| **spearman_cosine** | **0.8312** |
| pearson_manhattan | 0.8121 |
| spearman_manhattan | 0.8103 |
| pearson_euclidean | 0.8124 |
| spearman_euclidean | 0.8106 |
| pearson_dot | 0.6947 |
| spearman_dot | 0.6858 |
| pearson_max | 0.8341 |
| spearman_max | 0.8312 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### sentence-transformers/stsb
* Dataset: [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb)
* Size: 20,127 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.0 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.95 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------|
| <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> |
| <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> |
| <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### sentence-transformers/stsb
* Dataset: [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb)
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 15.1 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.11 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------|:------------------------------------------------------|:------------------|
| <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> |
| <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> |
| <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
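Putting the pieces above together, a minimal fine-tuning sketch using the same loss and the listed non-default hyperparameters might look as follows. Argument names follow the Sentence Transformers v3 trainer API; the 20,127-pair augmented training set used for this model is not reproduced here, so the plain stsb train split stands in for it:
```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Start from the base model; a mean-pooling head is added automatically.
model = SentenceTransformer("google-bert/bert-base-uncased")
dataset = load_dataset("sentence-transformers/stsb")

# CosineSimilarityLoss minimizes MSE between cosine similarity and the gold score.
loss = losses.CosineSimilarityLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="bert-base-uncased-sts",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    loss=loss,
)
trainer.train()
```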
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | sts-test_spearman_cosine |
|:------:|:----:|:-------------:|:------:|:------------------------:|
| 0.0795 | 100 | 0.0526 | 0.0390 | 0.8215 |
| 0.1590 | 200 | 0.0218 | 0.0335 | 0.8415 |
| 0.2385 | 300 | 0.0186 | 0.0310 | 0.8561 |
| 0.3180 | 400 | 0.0166 | 0.0341 | 0.8479 |
| 0.3975 | 500 | 0.0176 | 0.0313 | 0.8503 |
| 0.4769 | 600 | 0.0155 | 0.0281 | 0.8652 |
| 0.5564 | 700 | 0.0148 | 0.0270 | 0.8656 |
| 0.6359 | 800 | 0.014 | 0.0277 | 0.8669 |
| 0.7154 | 900 | 0.0149 | 0.0286 | 0.8694 |
| 0.7949 | 1000 | 0.0125 | 0.0281 | 0.8724 |
| 0.8744 | 1100 | 0.013 | 0.0285 | 0.8694 |
| 0.9539 | 1200 | 0.0127 | 0.0269 | 0.8723 |
| 1.0 | 1258 | - | - | 0.8312 |
### Framework Versions
- Python: 3.9.2
- Sentence Transformers: 3.0.1
- Transformers: 4.43.1
- PyTorch: 2.3.1+cu121
- Accelerate: 0.34.2
- Datasets: 2.14.7
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
saintsauce/roberta-base_finetuned_model_lr_2e-05 | saintsauce | 2024-11-26T20:03:03Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T20:02:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF | bartowski | 2024-11-26T19:59:36Z | 72 | 0 | null | [
"gguf",
"text-generation",
"en",
"dataset:nvidia/ChatQA-Training-Data",
"base_model:DoeyLLM/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B",
"base_model:quantized:DoeyLLM/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-26T18:56:23Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
datasets:
- nvidia/ChatQA-Training-Data
base_model: DoeyLLM/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B
license: apache-2.0
language:
- en
---
## Llamacpp imatrix Quantizations of OneLLM-Doey-ChatQA-V1-Llama-3.2-1B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4132">b4132</a> for quantization.
Original model: https://huggingface.co/DoeyLLM/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 26 Nov 2024
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
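For reference, a small helper that fills this template before handing the string to llama.cpp might look like the following (the function name and default system prompt are illustrative):
```python
def build_prompt(prompt: str, system_prompt: str = "You are a helpful assistant.") -> str:
    # Illustrative helper that fills the Llama 3.2 chat template shown above.
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        "Cutting Knowledge Date: December 2023\n"
        "Today Date: 26 Nov 2024\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_prompt("What is the capital of Spain?"))
```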
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-f16.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-f16.gguf) | f16 | 2.48GB | false | Full F16 weights. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q8_0.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q8_0.gguf) | Q8_0 | 1.32GB | false | Extremely high quality, generally unneeded but max available quant. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q6_K_L.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q6_K_L.gguf) | Q6_K_L | 1.09GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q6_K.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q6_K.gguf) | Q6_K | 1.02GB | false | Very high quality, near perfect, *recommended*. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q5_K_L.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q5_K_L.gguf) | Q5_K_L | 0.98GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q5_K_M.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q5_K_M.gguf) | Q5_K_M | 0.91GB | false | High quality, *recommended*. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q5_K_S.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q5_K_S.gguf) | Q5_K_S | 0.89GB | false | High quality, *recommended*. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q4_K_L.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q4_K_L.gguf) | Q4_K_L | 0.87GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q4_K_M.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q4_K_M.gguf) | Q4_K_M | 0.81GB | false | Good quality, default size for most use cases, *recommended*. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q3_K_XL.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q3_K_XL.gguf) | Q3_K_XL | 0.80GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q4_K_S.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q4_K_S.gguf) | Q4_K_S | 0.78GB | false | Slightly lower quality with more space savings, *recommended*. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q4_0_8_8.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q4_0_8_8.gguf) | Q4_0_8_8 | 0.77GB | false | Optimized for ARM and AVX inference. Requires 'sve' support for ARM (see details below). *Don't use on Mac*. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q4_0_4_8.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q4_0_4_8.gguf) | Q4_0_4_8 | 0.77GB | false | Optimized for ARM inference. Requires 'i8mm' support (see details below). *Don't use on Mac*. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q4_0_4_4.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q4_0_4_4.gguf) | Q4_0_4_4 | 0.77GB | false | Optimized for ARM inference. Should work well on all ARM chips, not for use with GPUs. *Don't use on Mac*. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q4_0.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q4_0.gguf) | Q4_0 | 0.77GB | false | Legacy format, generally not worth using over similarly sized formats |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-IQ4_XS.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-IQ4_XS.gguf) | IQ4_XS | 0.74GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q3_K_L.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q3_K_L.gguf) | Q3_K_L | 0.73GB | false | Lower quality but usable, good for low RAM availability. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-IQ3_M.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-IQ3_M.gguf) | IQ3_M | 0.66GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q2_K_L.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q2_K_L.gguf) | Q2_K_L | 0.64GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q2_K.gguf](https://huggingface.co/bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF/blob/main/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q2_K.gguf) | Q2_K | 0.58GB | false | Very low quality but surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method, but with the embeddings and output weights quantized to Q8_0 instead of the type they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF --include "OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF --include "OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q8_0) or download them all in place (./)
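The same single-file download can also be scripted from Python with `huggingface_hub`, for example:
```python
from huggingface_hub import hf_hub_download

# Downloads a single quant file into the current directory.
path = hf_hub_download(
    repo_id="bartowski/OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-GGUF",
    filename="OneLLM-Doey-ChatQA-V1-Llama-3.2-1B-Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```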
</details>
## Q4_0_X_X information
<details>
<summary>Click to view Q4_0_X_X information</summary>
These are *NOT* for Metal (Apple) or GPU (nvidia/AMD/intel) offloading, only ARM chips (and certain AVX2/AVX512 CPUs).
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
If you're using a CPU that supports AVX2 or AVX512 (typically server CPUs and AMD's latest Zen5 CPUs) and are not offloading to a GPU, the Q4_0_8_8 may offer a nice speed as well:
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also used for AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
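The rule of thumb above (pick a file roughly 1-2GB smaller than your available VRAM) can be written down as a tiny helper; the quant names and sizes below are taken from the download table earlier in this card, and the headroom value is an assumption:
```python
# Pick the largest quant that leaves ~1.5GB of VRAM headroom, per the rule of thumb above.
# Sizes in GB, taken from the download table earlier in this card.
QUANT_SIZES_GB = {
    "Q8_0": 1.32, "Q6_K": 1.02, "Q5_K_M": 0.91, "Q4_K_M": 0.81,
    "IQ4_XS": 0.74, "IQ3_M": 0.66, "Q2_K": 0.58,
}

def pick_quant(vram_gb, headroom_gb=1.5):
    fitting = {name: size for name, size in QUANT_SIZES_GB.items()
               if size <= vram_gb - headroom_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(4.0))  # -> "Q8_0" on a 4GB card with ~1.5GB headroom
```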
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mav23/Llama-2-7b-ft-instruct-es-GGUF | mav23 | 2024-11-26T19:57:46Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"es",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-11-26T19:11:47Z | ---
license: apache-2.0
language:
- es
pipeline_tag: text-generation
library_name: transformers
inference: false
---
# Llama-2-7B-ft-instruct-es
[Llama 2 (7B)](https://huggingface.co/meta-llama/Llama-2-7b) fine-tuned on [Clibrain](https://huggingface.co/clibrain)'s Spanish instructions dataset.
## Model Details
Llama 2 is a collection of pre-trained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pre-trained model. Links to other models can be found in the index at the bottom.
## Example of Usage
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
model_id = "clibrain/Llama-2-7b-ft-instruct-es"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)
def create_instruction(instruction, input_data=None, context=None):
sections = {
"Instrucción": instruction,
"Entrada": input_data,
"Contexto": context,
}
system_prompt = "A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud.\n\n"
prompt = system_prompt
for title, content in sections.items():
if content is not None:
prompt += f"### {title}:\n{content}\n\n"
prompt += "### Respuesta:\n"
return prompt
def generate(
instruction,
input=None,
context=None,
max_new_tokens=128,
temperature=0.1,
top_p=0.75,
top_k=40,
num_beams=4,
**kwargs
):
prompt = create_instruction(instruction, input, context)
print(prompt.replace("### Respuesta:\n", ""))
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to("cuda")
attention_mask = inputs["attention_mask"].to("cuda")
generation_config = GenerationConfig(
temperature=temperature,
top_p=top_p,
top_k=top_k,
num_beams=num_beams,
**kwargs,
)
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=max_new_tokens,
early_stopping=True
)
s = generation_output.sequences[0]
output = tokenizer.decode(s)
return output.split("### Respuesta:")[1].lstrip("\n")
instruction = "Dame una lista de lugares a visitar en España."
print(generate(instruction))
```
## Example of Usage with `pipelines`
```py
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "clibrain/Llama-2-7b-ft-instruct-es"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200, device=0)
prompt = """
A continuación hay una instrucción que describe una tarea. Escriba una respuesta que complete adecuadamente la solicitud.
### Instrucción:
Dame una lista de 5 lugares a visitar en España.
### Respuesta:
"""
result = pipe(prompt)
print(result[0]['generated_text'])
``` |
bartowski/Sparse-Llama-3.1-8B-2of4-GGUF | bartowski | 2024-11-26T19:55:13Z | 322 | 3 | null | [
"gguf",
"vllm",
"sparsity",
"text-generation",
"base_model:neuralmagic/Sparse-Llama-3.1-8B-2of4",
"base_model:quantized:neuralmagic/Sparse-Llama-3.1-8B-2of4",
"license:llama3.1",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T18:53:54Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: neuralmagic/Sparse-Llama-3.1-8B-2of4
tags:
- vllm
- sparsity
license: llama3.1
---
## Llamacpp imatrix Quantizations of Sparse-Llama-3.1-8B-2of4
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4132">b4132</a> for quantization.
Original model: https://huggingface.co/neuralmagic/Sparse-Llama-3.1-8B-2of4
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
No prompt format found, check original model page
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Sparse-Llama-3.1-8B-2of4-f16.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-f16.gguf) | f16 | 16.07GB | false | Full F16 weights. |
| [Sparse-Llama-3.1-8B-2of4-Q8_0.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q8_0.gguf) | Q8_0 | 8.54GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Sparse-Llama-3.1-8B-2of4-Q6_K_L.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q6_K_L.gguf) | Q6_K_L | 6.85GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Sparse-Llama-3.1-8B-2of4-Q6_K.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q6_K.gguf) | Q6_K | 6.60GB | false | Very high quality, near perfect, *recommended*. |
| [Sparse-Llama-3.1-8B-2of4-Q5_K_L.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q5_K_L.gguf) | Q5_K_L | 6.06GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Sparse-Llama-3.1-8B-2of4-Q5_K_M.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q5_K_M.gguf) | Q5_K_M | 5.73GB | false | High quality, *recommended*. |
| [Sparse-Llama-3.1-8B-2of4-Q5_K_S.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q5_K_S.gguf) | Q5_K_S | 5.60GB | false | High quality, *recommended*. |
| [Sparse-Llama-3.1-8B-2of4-Q4_K_L.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q4_K_L.gguf) | Q4_K_L | 5.31GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Sparse-Llama-3.1-8B-2of4-Q4_K_M.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q4_K_M.gguf) | Q4_K_M | 4.92GB | false | Good quality, default size for most use cases, *recommended*. |
| [Sparse-Llama-3.1-8B-2of4-Q3_K_XL.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q3_K_XL.gguf) | Q3_K_XL | 4.78GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Sparse-Llama-3.1-8B-2of4-Q4_K_S.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q4_K_S.gguf) | Q4_K_S | 4.69GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Sparse-Llama-3.1-8B-2of4-Q4_0.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q4_0.gguf) | Q4_0 | 4.68GB | false | Legacy format, generally not worth using over similarly sized formats |
| [Sparse-Llama-3.1-8B-2of4-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q4_0_8_8.gguf) | Q4_0_8_8 | 4.66GB | false | Optimized for ARM and AVX inference. Requires 'sve' support for ARM (see details below). *Don't use on Mac*. |
| [Sparse-Llama-3.1-8B-2of4-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q4_0_4_8.gguf) | Q4_0_4_8 | 4.66GB | false | Optimized for ARM inference. Requires 'i8mm' support (see details below). *Don't use on Mac*. |
| [Sparse-Llama-3.1-8B-2of4-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q4_0_4_4.gguf) | Q4_0_4_4 | 4.66GB | false | Optimized for ARM inference. Should work well on all ARM chips, not for use with GPUs. *Don't use on Mac*. |
| [Sparse-Llama-3.1-8B-2of4-IQ4_XS.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-IQ4_XS.gguf) | IQ4_XS | 4.45GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Sparse-Llama-3.1-8B-2of4-Q3_K_L.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q3_K_L.gguf) | Q3_K_L | 4.32GB | false | Lower quality but usable, good for low RAM availability. |
| [Sparse-Llama-3.1-8B-2of4-Q3_K_M.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q3_K_M.gguf) | Q3_K_M | 4.02GB | false | Low quality. |
| [Sparse-Llama-3.1-8B-2of4-IQ3_M.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-IQ3_M.gguf) | IQ3_M | 3.78GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Sparse-Llama-3.1-8B-2of4-Q2_K_L.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q2_K_L.gguf) | Q2_K_L | 3.69GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Sparse-Llama-3.1-8B-2of4-Q3_K_S.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q3_K_S.gguf) | Q3_K_S | 3.66GB | false | Low quality, not recommended. |
| [Sparse-Llama-3.1-8B-2of4-IQ3_XS.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-IQ3_XS.gguf) | IQ3_XS | 3.52GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Sparse-Llama-3.1-8B-2of4-Q2_K.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-Q2_K.gguf) | Q2_K | 3.18GB | false | Very low quality but surprisingly usable. |
| [Sparse-Llama-3.1-8B-2of4-IQ2_M.gguf](https://huggingface.co/bartowski/Sparse-Llama-3.1-8B-2of4-GGUF/blob/main/Sparse-Llama-3.1-8B-2of4-IQ2_M.gguf) | IQ2_M | 2.95GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method, but with the embeddings and output weights quantized to Q8_0 instead of the type they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Sparse-Llama-3.1-8B-2of4-GGUF --include "Sparse-Llama-3.1-8B-2of4-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Sparse-Llama-3.1-8B-2of4-GGUF --include "Sparse-Llama-3.1-8B-2of4-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Sparse-Llama-3.1-8B-2of4-Q8_0) or download them all in place (./)
</details>
## Q4_0_X_X information
<details>
<summary>Click to view Q4_0_X_X information</summary>
These are *NOT* for Metal (Apple) or GPU (nvidia/AMD/intel) offloading, only ARM chips (and certain AVX2/AVX512 CPUs).
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
If you're using a CPU that supports AVX2 or AVX512 (typically server CPUs and AMD's latest Zen5 CPUs) and are not offloading to a GPU, the Q4_0_8_8 may offer a nice speed as well:
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also used for AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mlx-community/Meta-Llama-3.1-70B-Instruct-8bit | mlx-community | 2024-11-26T19:48:55Z | 92 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"mlx",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-23T14:40:12Z | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.1
library_name: transformers
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- mlx
pipeline_tag: text-generation
extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\
\ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\
\ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\
\ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\
\ create derivative works of, and make modifications to the Llama Materials.\nb.\
\ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\
\ (or any derivative works thereof), or a product or service (including another\
\ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\
\ with any such Llama Materials; and (B) prominently display “Built with Llama”\
\ on a related website, user interface, blogpost, about page, or product documentation.\
\ If you use the Llama Materials or any outputs or results of the Llama Materials\
\ to create, train, fine tune, or otherwise improve an AI model, which is distributed\
\ or made available, you shall also include “Llama” at the beginning of any such\
\ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\
\ from a Licensee as part of an integrated end user product, then Section 2 of\
\ this Agreement will not apply to you.\niii. You must retain in all copies of the\
\ Llama Materials that you distribute the following attribution notice within a\
\ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\
\ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\
\ Reserved.”\niv. Your use of the Llama Materials must comply with applicable laws\
\ and regulations (including trade compliance laws and regulations) and adhere to\
\ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\
\ which is hereby incorporated by reference into this Agreement.\n2. Additional\
\ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\
\ users of the products or services made available by or for Licensee, or Licensee’s\
\ affiliates, is greater than 700 million monthly active users in the preceding\
\ calendar month, you must request a license from Meta, which Meta may grant to\
\ you in its sole discretion, and you are not authorized to exercise any of the\
\ rights under this Agreement unless or until Meta otherwise expressly grants you\
\ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\
\ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\
\ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\
\ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\
\ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\
\ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\
\ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\
\ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\
\ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\
\ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\
\ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\
\ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\
\ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\
\ trademark licenses are granted under this Agreement, and in connection with the\
\ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\
\ associated with the other or any of its affiliates, except as required for reasonable\
\ and customary use in describing and redistributing the Llama Materials or as set\
\ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\
\ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\
\ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\
\ ). All goodwill arising out of your use of the Mark will inure to the benefit\
\ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\
\ by or for Meta, with respect to any derivative works and modifications of the\
\ Llama Materials that are made by you, as between you and Meta, you are and will\
\ be the owner of such derivative works and modifications.\nc. If you institute\
\ litigation or other proceedings against Meta or any entity (including a cross-claim\
\ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\
\ or results, or any portion of any of the foregoing, constitutes infringement of\
\ intellectual property or other rights owned or licensable by you, then any licenses\
\ granted to you under this Agreement shall terminate as of the date such litigation\
\ or claim is filed or instituted. You will indemnify and hold harmless Meta from\
\ and against any claim by any third party arising out of or related to your use\
\ or distribution of the Llama Materials.\n6. Term and Termination. The term of\
\ this Agreement will commence upon your acceptance of this Agreement or access\
\ to the Llama Materials and will continue in full force and effect until terminated\
\ in accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\
\ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 5.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 7. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 8. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\
\ 6. Generating or facilitating false online engagement, including fake reviews\
\ and other means of fake online engagement\n4. Fail to appropriately disclose to\
\ end users any known dangers of your AI system\nPlease report any violation of\
\ this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
# mlx-community/Meta-Llama-3.1-70B-Instruct-8bit
The Model [mlx-community/Meta-Llama-3.1-70B-Instruct-8bit](https://huggingface.co/mlx-community/Meta-Llama-3.1-70B-Instruct-8bit) was converted to MLX format from [meta-llama/Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct) using mlx-lm version **0.16.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Meta-Llama-3.1-70B-Instruct-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
mlx-community/Meta-Llama-3.1-8B-Instruct-8bit | mlx-community | 2024-11-26T19:46:03Z | 926 | 9 | mlx | [
"mlx",
"safetensors",
"llama",
"facebook",
"meta",
"pytorch",
"llama-3",
"text-generation",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"license:llama3.1",
"region:us"
] | text-generation | 2024-07-23T14:39:39Z | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.1
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- mlx
pipeline_tag: text-generation
extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\
\ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\
\ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\
\ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\
\ create derivative works of, and make modifications to the Llama Materials.\nb.\
\ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\
\ (or any derivative works thereof), or a product or service (including another\
\ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\
\ with any such Llama Materials; and (B) prominently display “Built with Llama”\
\ on a related website, user interface, blogpost, about page, or product documentation.\
\ If you use the Llama Materials or any outputs or results of the Llama Materials\
\ to create, train, fine tune, or otherwise improve an AI model, which is distributed\
\ or made available, you shall also include “Llama” at the beginning of any such\
\ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\
\ from a Licensee as part of an integrated end user product, then Section 2 of\
\ this Agreement will not apply to you.\niii. You must retain in all copies of the\
\ Llama Materials that you distribute the following attribution notice within a\
\ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\
\ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\
\ Reserved.”\niv. Your use of the Llama Materials must comply with applicable laws\
\ and regulations (including trade compliance laws and regulations) and adhere to\
\ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\
\ which is hereby incorporated by reference into this Agreement.\n2. Additional\
\ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\
\ users of the products or services made available by or for Licensee, or Licensee’s\
\ affiliates, is greater than 700 million monthly active users in the preceding\
\ calendar month, you must request a license from Meta, which Meta may grant to\
\ you in its sole discretion, and you are not authorized to exercise any of the\
\ rights under this Agreement unless or until Meta otherwise expressly grants you\
\ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\
\ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\
\ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\
\ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\
\ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\
\ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\
\ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\
\ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\
\ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\
\ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\
\ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\
\ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\
\ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\
\ trademark licenses are granted under this Agreement, and in connection with the\
\ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\
\ associated with the other or any of its affiliates, except as required for reasonable\
\ and customary use in describing and redistributing the Llama Materials or as set\
\ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\
\ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\
\ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\
\ ). All goodwill arising out of your use of the Mark will inure to the benefit\
\ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\
\ by or for Meta, with respect to any derivative works and modifications of the\
\ Llama Materials that are made by you, as between you and Meta, you are and will\
\ be the owner of such derivative works and modifications.\nc. If you institute\
\ litigation or other proceedings against Meta or any entity (including a cross-claim\
\ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\
\ or results, or any portion of any of the foregoing, constitutes infringement of\
\ intellectual property or other rights owned or licensable by you, then any licenses\
\ granted to you under this Agreement shall terminate as of the date such litigation\
\ or claim is filed or instituted. You will indemnify and hold harmless Meta from\
\ and against any claim by any third party arising out of or related to your use\
\ or distribution of the Llama Materials.\n6. Term and Termination. The term of\
\ this Agreement will commence upon your acceptance of this Agreement or access\
\ to the Llama Materials and will continue in full force and effect until terminated\
\ in accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\
\ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 5.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 7. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 8. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\
\ 6. Generating or facilitating false online engagement, including fake reviews\
\ and other means of fake online engagement\n4. Fail to appropriately disclose to\
\ end users any known dangers of your AI system\nPlease report any violation of\
\ this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
# mlx-community/Meta-Llama-3.1-8B-Instruct-8bit
The Model [mlx-community/Meta-Llama-3.1-8B-Instruct-8bit](https://huggingface.co/mlx-community/Meta-Llama-3.1-8B-Instruct-8bit) was converted to MLX format from [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) using mlx-lm version **0.16.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Meta-Llama-3.1-8B-Instruct-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
TheDrummer/Behemoth-123B-v2.2 | TheDrummer | 2024-11-26T19:40:46Z | 179 | 23 | null | [
"safetensors",
"mistral",
"license:other",
"region:us"
] | null | 2024-11-24T06:10:09Z | ---
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2500 members strong 💪
### Now with more channels! A hub for creatives and makers alike!
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
*The finetune that made people buy another 3090...*
# Behemoth 123B v2.2 🦣 - Chaos Edition
> Nothing in the void is foreign to us. The place we go is the place we belong.

## Links
- Original: https://huggingface.co/TheDrummer/Behemoth-123B-v2.2
- GGUF: https://huggingface.co/TheDrummer/Behemoth-123B-v2.2-GGUF
- iMatrix: https://huggingface.co/bartowski/Behemoth-123B-v2.2-GGUF (recommended for smaller quants)
## Description
Behemoth v2.x is a finetune of the new Largestral 2411 with system prompt support. Testers have noted that **everything** felt improved.
### Usage
Testers say this frankenformat maximizes the model's potential: **Metharme** with Mistral's new system tokens
- `[SYSTEM_PROMPT] <|system|>{{system_message}}[/SYSTEM_PROMPT]<|user|>{{user_message}}<|model|>{{assistant_message}}`
- `<|system|>[SYSTEM_PROMPT] {{system_message}}[/SYSTEM_PROMPT]<|user|>{{user_message}}<|model|>{{assistant_message}}`
*Take note that the opening `[SYSTEM_PROMPT]` tag SHOULD ALWAYS be followed by a whitespace.*
Complete SillyTavern Settings in BeaverAI Club: https://discord.com/channels/1238219753324281886/1309968730301792370/1309968730301792370
Mirror: https://rentry.org/cd32disa
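For illustration only (the helper function and sample messages below are assumptions, not part of the original release), the first variant of the format can be assembled like this:
```python
# Minimal sketch: build the "Metharme + Mistral system tokens" prompt shown above.
# Note the single whitespace after [SYSTEM_PROMPT], as the note above requires.
def build_prompt(system_message: str, user_message: str) -> str:
    return (
        f"[SYSTEM_PROMPT] <|system|>{system_message}[/SYSTEM_PROMPT]"
        f"<|user|>{user_message}<|model|>"
    )

# Generation continues from the trailing <|model|> tag.
print(build_prompt("You are a vivid storyteller.", "Continue the scene in the ruined observatory."))
```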
### Versions
- [v2.0](https://huggingface.co/TheDrummer/Behemoth-123B-v2) is equivalent to Behemoth v1.0 (Classic)
- Claude-like creativity and prose
- Very familiar style
- Solid for all tasks
- [v2.1](https://huggingface.co/TheDrummer/Behemoth-123B-v2.1) is equivalent to Behemoth v1.1 (Creative Boost)
- Creative and lively
- Unique prose
- Balanced enough for RP and other tasks
- [v2.2](https://huggingface.co/TheDrummer/Behemoth-123B-v2.2) is a cranked up version of Behemoth v2.1 (Unhinged)
- Creatively unhinged
- Constantly unique prose
- May be too chaotic for strict RP, thrives in adventure / story
## Special Thanks
Thank you to each and every one of you who donated/subscribed on [Ko-Fi](https://ko-fi.com/thedrummer) 🙇 I hope to never disappoint!
```
Toasty Pigeon
theguywhogamesalot
Grozi
F
Marinara
Ko-fi Supporter
Grozi
Phaelon
ONTHEREDTEAM
EvarinSharath'fe(USM-Valor)
Silva
Dakkidaze
AlexTheVP
Pseudo
Kistara
Dr. Fjut
Grozi 🥈
KinjiHakari777
dustywintr
Syd
HumbleConsumer
Syd
Ko-fi Supporter
Arkamist
joe 🥇
Toad
Lied
Konnect
Kistara
Grozi 🥉
SleepDeprived3
Luigi
Nestor
```
https://ko-fi.com/thedrummer/leaderboard
```
Finetuned by yours truly,
Drummer
```
Thank you Gargy for the GPUs!
 |
TheDrummer/Behemoth-123B-v2.2-GGUF | TheDrummer | 2024-11-26T19:40:37Z | 574 | 3 | null | [
"gguf",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-24T12:43:53Z | ---
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2500 members strong 💪
### Now with more channels! A hub for creatives and makers alike!
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
*The finetune that made people buy another 3090...*
# Behemoth 123B v2.2 🦣 - Chaos Edition
> Nothing in the void is foreign to us. The place we go is the place we belong.

## Links
- Original: https://huggingface.co/TheDrummer/Behemoth-123B-v2.2
- GGUF: https://huggingface.co/TheDrummer/Behemoth-123B-v2.2-GGUF
- iMatrix: https://huggingface.co/bartowski/Behemoth-123B-v2.2-GGUF (recommended for smaller quants)
## Description
Behemoth v2.x is a finetune of the new Largestral 2411 with system prompt support. Testers have noted that **everything** felt improved.
### Usage
Testers say this frankenformat maximizes the model's potential: **Metharme** with Mistral's new system tokens
- `[SYSTEM_PROMPT] <|system|>{{system_message}}[/SYSTEM_PROMPT]<|user|>{{user_message}}<|model|>{{assistant_message}}`
- `<|system|>[SYSTEM_PROMPT] {{system_message}}[/SYSTEM_PROMPT]<|user|>{{user_message}}<|model|>{{assistant_message}}`
*Take note that the opening `[SYSTEM_PROMPT]` tag SHOULD ALWAYS be followed by a whitespace.*
Complete SillyTavern Settings in BeaverAI Club: https://discord.com/channels/1238219753324281886/1309968730301792370/1309968730301792370
Mirror: https://rentry.org/cd32disa
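As a rough usage sketch (not from the original card — the local file name, context size, and prompt text are assumptions), a downloaded quant can be driven through llama-cpp-python with the same format:
```python
from llama_cpp import Llama

# Assumed local path to one of the GGUF quants from this repo.
llm = Llama(model_path="Behemoth-123B-v2.2-Q4_K_M.gguf", n_ctx=8192)

prompt = (
    "[SYSTEM_PROMPT] <|system|>You are a vivid narrator.[/SYSTEM_PROMPT]"
    "<|user|>Describe the mammoth in the snowstorm.<|model|>"
)
out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])
```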
### Versions
- [v2.0](https://huggingface.co/TheDrummer/Behemoth-123B-v2) is equivalent to Behemoth v1.0 (Classic)
- Claude-like creativity and prose
- Very familiar style
- Solid for all tasks
- [v2.1](https://huggingface.co/TheDrummer/Behemoth-123B-v2.1) is equivalent to Behemoth v1.1 (Creative Boost)
- Creative and lively
- Unique prose
- Balanced enough for RP and other tasks
- [v2.2](https://huggingface.co/TheDrummer/Behemoth-123B-v2.2) is a cranked up version of Behemoth v2.1 (Unhinged)
- Creatively unhinged
- Constantly unique prose
- May be too chaotic for strict RP, thrives in adventure / story
## Special Thanks
Thank you to each and every one of you who donated/subscribed on [Ko-Fi](https://ko-fi.com/thedrummer) 🙇 I hope to never disappoint!
```
Toasty Pigeon
theguywhogamesalot
Grozi
F
Marinara
Ko-fi Supporter
Grozi
Phaelon
ONTHEREDTEAM
EvarinSharath'fe(USM-Valor)
Silva
Dakkidaze
AlexTheVP
Pseudo
Kistara
Dr. Fjut
Grozi 🥈
KinjiHakari777
dustywintr
Syd
HumbleConsumer
Syd
Ko-fi Supporter
Arkamist
joe 🥇
Toad
Lied
Konnect
Kistara
Grozi 🥉
SleepDeprived3
Luigi
Nestor
```
https://ko-fi.com/thedrummer/leaderboard
```
Finetuned by yours truly,
Drummer
```
Thank you Gargy for the GPUs!
 |
ace-in-the-hole/7k-PhoContent-10304 | ace-in-the-hole | 2024-11-26T19:38:47Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"license:agpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T19:38:14Z | ---
library_name: transformers
license: agpl-3.0
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: 7k-PhoContent-10304
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 7k-PhoContent-10304
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2320
- Accuracy: 0.9419
- F1: 0.9150
- Precision: 0.9233
- Recall: 0.9074
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
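The hyperparameters above map roughly onto 🤗 `TrainingArguments` as in the sketch below (an approximation only; the original training script is not published, and the output directory is an assumed placeholder):
```python
from transformers import TrainingArguments

# Approximate reconstruction of the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="7k-PhoContent-10304",   # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=4,      # effective train batch size 256
    num_train_epochs=20,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)
```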
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.7443 | 2.6144 | 100 | 0.2005 | 0.9360 | 0.9056 | 0.9183 | 0.8945 |
| 0.348 | 5.2288 | 200 | 0.2175 | 0.9360 | 0.9024 | 0.9329 | 0.8792 |
| 0.348 | 7.8431 | 300 | 0.1926 | 0.9419 | 0.9128 | 0.9342 | 0.8952 |
| 0.1555 | 10.4575 | 400 | 0.2010 | 0.9457 | 0.9197 | 0.9344 | 0.9069 |
| 0.0984 | 13.0719 | 500 | 0.2211 | 0.9302 | 0.8967 | 0.9106 | 0.8846 |
| 0.0984 | 15.6863 | 600 | 0.2338 | 0.9322 | 0.8999 | 0.9124 | 0.8889 |
| 0.065 | 18.3007 | 700 | 0.2320 | 0.9419 | 0.9150 | 0.9233 | 0.9074 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct | Vikhrmodels | 2024-11-26T19:38:32Z | 4,278 | 12 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"ru",
"en",
"dataset:Vikhrmodels/GrandMaster-PRO-MAX",
"arxiv:2405.13929",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-06T20:19:59Z | ---
library_name: transformers
model_name: Vikhr-Qwen-2.5-1.5B-Instruct
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
language:
- ru
- en
license: apache-2.0
datasets:
- Vikhrmodels/GrandMaster-PRO-MAX
---
# 💨🦅 Vikhr-Qwen-2.5-1.5B-Instruct
#### RU
Инструктивная модель на основе **Qwen-2.5-1.5B-Instruct**, обученная на русскоязычном датасете **GrandMaster-PRO-MAX**. Создана для высокоэффективной обработки текстов на русском и английском языках, обеспечивая точные ответы и быстрое выполнение задач.
#### EN
Instructive model based on **Qwen-2.5-1.5B-Instruct**, trained on the Russian-language dataset **GrandMaster-PRO-MAX**. Designed for high-efficiency text processing in Russian and English, delivering precise responses and fast task execution.
## Quantized variants:
- GGUF [Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-GGUF](https://huggingface.co/Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-GGUF)
- MLX
- 4 bit [Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-MLX_4bit](https://huggingface.co/Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-MLX_4bit)
- 8 bit [Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-MLX_8bit](https://huggingface.co/Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-MLX_8bit)
## Особенности / Features:
- 📚 Основа / Base: [Qwen-2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
- 🇷🇺 Специализация / Specialization: **RU**
- 💾 Датасет / Dataset: [GrandMaster-PRO-MAX](https://huggingface.co/datasets/Vikhrmodels/GrandMaster-PRO-MAX)
- 🌍 Поддержка / Support: **Bilingual RU/EN**
## Попробовать / Try now:
[](https://colab.research.google.com/drive/1bJpLmplDGkMbfOLO2CH6IO-2uUZEaknf?usp=sharing)
## Описание / Description:
#### RU
**Vikhr-Qwen-2.5-1.5B-Instruct** — мощная языковая модель, обученная на датасете **GrandMaster-PRO-MAX**, поддерживает генерацию инструкций, контекстные ответы и анализ текста на русском языке. Эта модель оптимизирована для задач инструктивного обучения и обработки текстов. Она подходит для использования в профессиональной среде, а также для интеграции в пользовательские приложения и сервисы.
#### EN
**Vikhr-Qwen-2.5-1.5B-Instruct** is a robust language model trained on the **GrandMaster-PRO-MAX** dataset. It excels in instruction generation, contextual responses, and text analysis in Russian. The model is optimized for instructional tasks and textual data processing, suitable for professional use as well as integration into user-facing applications and services.
## Обучение / Training:
#### RU
**Vikhr-Qwen-2.5-1.5B-Instruct** была создана с использованием метода SFT (Supervised Fine-Tuning). Мы использовали синтетический датасет **GrandMaster-PRO-MAX** (150k инструкций), применяя подход CoT (Chain-Of-Thought) и промпты для GPT-4-turbo. Это позволило добиться высокой точности и когерентности ответов.
#### EN
**Vikhr-Qwen-2.5-1.5B-Instruct** was developed using the SFT (Supervised Fine-Tuning) method. The synthetic dataset **GrandMaster-PRO-MAX** (150k instructions) was used with CoT (Chain-Of-Thought) methodology and GPT-4-turbo prompts, enabling high accuracy and coherence in responses.
## Пример кода для запуска / Sample code to run:
**Рекомендуемая температура для генерации: 0.3** / **Recommended generation temperature: 0.3**.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer
model_name = "Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Prepare the input text
input_text = "Напиши краткое описание книги Гарри Поттер."
messages = [
    {"role": "system", "content": "Вы — Vikhr, ИИ помощник, созданный компанией Vikhr models для предоставления полезной, честной и безопасной информации."},
    {"role": "user", "content": input_text},
]
# Tokenize and generate text
input_ids = tokenizer.apply_chat_template(messages, truncation=True, add_generation_prompt=True, return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=1512,
    do_sample=True,  # sampling must be enabled for temperature/top_k/top_p to take effect
    temperature=0.3,
    num_return_sequences=1,
    no_repeat_ngram_size=2,
    top_k=50,
    top_p=0.95,
)
# Decode and print result
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
#### Ответ модели / Model response:
>Книга "Гарри Поттер" — это популярное произведение в жанре фэнтези, которое исследует темы дружбы, магии и борьбы со злом. Главный герой проходит путь взросления, преодолевая препятствия и сталкиваясь с моральными вызовами.
### Авторы / Authors
- Sergei Bratchikov, [NLP Wanderer](https://t.me/nlpwanderer), [Vikhr Team](https://t.me/vikhrlabs)
- Nikolay Kompanets, [LakoMoor](https://t.me/lakomoordev), [Vikhr Team](https://t.me/vikhrlabs)
- Konstantin Korolev, [Vikhr Team](https://t.me/vikhrlabs)
- Aleksandr Nikolich, [Vikhr Team](https://t.me/vikhrlabs)
```
@inproceedings{nikolich2024vikhr,
title={Vikhr: Advancing Open-Source Bilingual Instruction-Following Large Language Models for Russian and English},
author={Aleksandr Nikolich and Konstantin Korolev and Sergei Bratchikov and Nikolay Kompanets and Igor Kiselev and Artem Shelmanov},
booktitle={Proceedings of the 4th Workshop on Multilingual Representation Learning (MRL) @ EMNLP-2024},
year={2024},
publisher={Association for Computational Linguistics},
url={https://arxiv.org/pdf/2405.13929}
}
``` |
Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q5_K_S-GGUF | Triangle104 | 2024-11-26T19:35:01Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:knifeayumu/Cydonia-v1.3-Magnum-v4-22B",
"base_model:quantized:knifeayumu/Cydonia-v1.3-Magnum-v4-22B",
"license:other",
"region:us",
"conversational"
] | null | 2024-11-26T19:33:20Z | ---
base_model: knifeayumu/Cydonia-v1.3-Magnum-v4-22B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
---
# Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q5_K_S-GGUF
This model was converted to GGUF format from [`knifeayumu/Cydonia-v1.3-Magnum-v4-22B`](https://huggingface.co/knifeayumu/Cydonia-v1.3-Magnum-v4-22B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/knifeayumu/Cydonia-v1.3-Magnum-v4-22B) for more details on the model.
---
Model details:
-
The Drummer becomes hornier (again)
Recipe based on knifeayumu/Cydonia-v1.2-Magnum-v4-22B but uses TheDrummer/Cydonia-22B-v1.3 as the base. Yes, MortalWombat. I'm gonna use your parameters as long as I can!
This is a merge of pre-trained language models created using mergekit.
Merge Method
-
This model was merged using the SLERP merge method.
Models Merged
-
The following models were included in the merge:
TheDrummer/Cydonia-22B-v1.3
anthracite-org/magnum-v4-22b
Configuration
-
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: TheDrummer/Cydonia-22B-v1.3
  - model: anthracite-org/magnum-v4-22b
merge_method: slerp
base_model: TheDrummer/Cydonia-22B-v1.3
parameters:
  t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: bfloat16
```
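As a small illustration of what SLERP does here (not part of the original card; mergekit's actual implementation works tensor-by-tensor and handles edge cases this sketch ignores), interpolating two weight vectors at a given `t` looks roughly like:
```python
import numpy as np

def slerp(v0: np.ndarray, v1: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    omega = np.arccos(np.clip(np.dot(v0n, v1n), -1.0, 1.0))
    if omega < eps:  # nearly parallel vectors: fall back to plain linear interpolation
        return (1.0 - t) * v0 + t * v1
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1

# The t schedule above ([0.1, 0.3, 0.6, 0.3, 0.1]) varies across layer groups:
# small t keeps layers close to the Cydonia base, larger t leans toward magnum-v4-22b.
```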
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q5_K_S-GGUF --hf-file cydonia-v1.3-magnum-v4-22b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q5_K_S-GGUF --hf-file cydonia-v1.3-magnum-v4-22b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q5_K_S-GGUF --hf-file cydonia-v1.3-magnum-v4-22b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q5_K_S-GGUF --hf-file cydonia-v1.3-magnum-v4-22b-q5_k_s.gguf -c 2048
```
|
Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-MLX_4bit | Vikhrmodels | 2024-11-26T19:31:45Z | 88 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"conversational",
"ru",
"en",
"dataset:Vikhrmodels/GrandMaster-PRO-MAX",
"base_model:Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct",
"base_model:quantized:Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2024-11-26T19:28:11Z | ---
library_name: transformers
model_name: Vikhr-Qwen-2.5-1.5B-Instruct
base_model: Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct
language:
- ru
- en
license: apache-2.0
datasets:
- Vikhrmodels/GrandMaster-PRO-MAX
tags:
- mlx
---
# Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-MLX_4bit
The Model [Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-MLX_4bit](https://huggingface.co/Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-MLX_4bit) was
converted to MLX format from [Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct](https://huggingface.co/Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct)
using mlx-lm version **0.20.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-MLX_4bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
saintsauce/bert-base-uncased_finetuned_model_lr_5e-05 | saintsauce | 2024-11-26T19:29:28Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T19:29:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF | bartowski | 2024-11-26T19:23:23Z | 100 | 1 | null | [
"gguf",
"Llama",
"Llama-CPP",
"SmolTalk",
"ollama",
"bin",
"text-generation",
"en",
"dataset:HuggingFaceTB/smoltalk",
"base_model:prithivMLmods/Llama-SmolTalk-3.2-1B-Instruct",
"base_model:quantized:prithivMLmods/Llama-SmolTalk-3.2-1B-Instruct",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-26T18:50:58Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
datasets:
- HuggingFaceTB/smoltalk
base_model: prithivMLmods/Llama-SmolTalk-3.2-1B-Instruct
tags:
- Llama
- Llama-CPP
- SmolTalk
- ollama
- bin
license: creativeml-openrail-m
language:
- en
---
## Llamacpp imatrix Quantizations of Llama-SmolTalk-3.2-1B-Instruct
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4132">b4132</a> for quantization.
Original model: https://huggingface.co/prithivMLmods/Llama-SmolTalk-3.2-1B-Instruct
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 26 July 2024
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
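For reference only (the sample system prompt and user message below are placeholders, not from the original card), the template can be filled in straightforwardly:
```python
# Minimal sketch: substitute {system_prompt} and {prompt} into the chat template above.
system_prompt = "You are a helpful assistant."
prompt = "Summarize the SmolTalk dataset in one sentence."

formatted = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "Cutting Knowledge Date: December 2023\n"
    "Today Date: 26 July 2024\n\n"
    f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)
print(formatted)
```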
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Llama-SmolTalk-3.2-1B-Instruct-f16.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-f16.gguf) | f16 | 2.48GB | false | Full F16 weights. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q8_0.gguf) | Q8_0 | 1.32GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q6_K_L.gguf) | Q6_K_L | 1.09GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q6_K.gguf) | Q6_K | 1.02GB | false | Very high quality, near perfect, *recommended*. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q5_K_L.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q5_K_L.gguf) | Q5_K_L | 0.98GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q5_K_M.gguf) | Q5_K_M | 0.91GB | false | High quality, *recommended*. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q5_K_S.gguf) | Q5_K_S | 0.89GB | false | High quality, *recommended*. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q4_K_L.gguf) | Q4_K_L | 0.87GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q4_K_M.gguf) | Q4_K_M | 0.81GB | false | Good quality, default size for most use cases, *recommended*. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q3_K_XL.gguf) | Q3_K_XL | 0.80GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q4_K_S.gguf) | Q4_K_S | 0.78GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q4_0_8_8.gguf) | Q4_0_8_8 | 0.77GB | false | Optimized for ARM and AVX inference. Requires 'sve' support for ARM (see details below). *Don't use on Mac*. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q4_0_4_8.gguf) | Q4_0_4_8 | 0.77GB | false | Optimized for ARM inference. Requires 'i8mm' support (see details below). *Don't use on Mac*. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q4_0_4_4.gguf) | Q4_0_4_4 | 0.77GB | false | Optimized for ARM inference. Should work well on all ARM chips, not for use with GPUs. *Don't use on Mac*. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q4_0.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q4_0.gguf) | Q4_0 | 0.77GB | false | Legacy format, generally not worth using over similarly sized formats |
| [Llama-SmolTalk-3.2-1B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-IQ4_XS.gguf) | IQ4_XS | 0.74GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q3_K_L.gguf) | Q3_K_L | 0.73GB | false | Lower quality but usable, good for low RAM availability. |
| [Llama-SmolTalk-3.2-1B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-IQ3_M.gguf) | IQ3_M | 0.66GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q2_K_L.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q2_K_L.gguf) | Q2_K_L | 0.64GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Llama-SmolTalk-3.2-1B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF/blob/main/Llama-SmolTalk-3.2-1B-Instruct-Q2_K.gguf) | Q2_K | 0.58GB | false | Very low quality but surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF --include "Llama-SmolTalk-3.2-1B-Instruct-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Llama-SmolTalk-3.2-1B-Instruct-GGUF --include "Llama-SmolTalk-3.2-1B-Instruct-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Llama-SmolTalk-3.2-1B-Instruct-Q8_0) or download them all in place (./)
</details>
## Q4_0_X_X information
<details>
<summary>Click to view Q4_0_X_X information</summary>
These are *NOT* for Metal (Apple) or GPU (nvidia/AMD/intel) offloading, only ARM chips (and certain AVX2/AVX512 CPUs).
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
If you're using a CPU that supports AVX2 or AVX512 (typically server CPUs and AMD's latest Zen5 CPUs) and are not offloading to a GPU, the Q4_0_8_8 may offer a nice speedup as well:
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
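As a rough sketch of that sizing rule — the quant names and file sizes below are illustrative placeholders, not taken from any specific table:

```python
# Minimal sketch of the sizing rule above: pick the largest quant that leaves ~1.5GB of headroom.
# Quant names and sizes here are illustrative placeholders.
quant_sizes_gb = {"Q8_0": 8.0, "Q6_K": 6.6, "Q5_K_M": 5.7, "Q4_K_M": 5.0, "IQ4_XS": 4.3, "Q3_K_M": 4.2}

def pick_quant(memory_budget_gb, headroom_gb=1.5):
    budget = memory_budget_gb - headroom_gb
    # Walk from largest to smallest and return the first quant that fits the budget.
    for name, size in sorted(quant_sizes_gb.items(), key=lambda kv: -kv[1]):
        if size <= budget:
            return name
    return None

print(pick_quant(8.0))  # largest quant that fits an 8GB card with headroom
```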
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also runs on AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct-GGUF | Vikhrmodels | 2024-11-26T19:21:19Z | 327 | 4 | llama.cpp | [
"llama.cpp",
"gguf",
"ru",
"en",
"dataset:Vikhrmodels/GrandMaster-PRO-MAX",
"base_model:Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct",
"base_model:quantized:Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-26T17:04:23Z | ---
model_name: Vikhr-Qwen-2.5-1.5B-Instruct-GGUF
base_model:
- Vikhrmodels/Vikhr-Qwen-2.5-1.5b-Instruct
library_name: llama.cpp
language:
- ru
- en
license: apache-2.0
datasets:
- Vikhrmodels/GrandMaster-PRO-MAX
---
# 💨🦅 Vikhr-Qwen-2.5-1.5B-Instruct
#### RU
Инструктивная модель на основе **Qwen-2.5-1.5B-Instruct**, обученная на русскоязычном датасете **GrandMaster-PRO-MAX**. Создана для высокоэффективной обработки текстов на русском и английском языках, обеспечивая точные ответы и быстрое выполнение задач.
#### EN
Instructive model based on **Qwen-2.5-1.5B-Instruct**, trained on the Russian-language dataset **GrandMaster-PRO-MAX**. Designed for high-efficiency text processing in Russian and English, delivering precise responses and fast task execution.
## Transformers
- [Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct](https://huggingface.co/Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct) |
psktoure/BERT_WordLevel_phoneme_wikitext | psktoure | 2024-11-26T19:15:03Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-25T00:01:04Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: BERT_WordLevel_phoneme_wikitext
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_WordLevel_phoneme_wikitext
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- training_steps: 106000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:------:|:---------------:|
| 3.0796 | 0.9394 | 2000 | 3.0683 |
| 2.1293 | 1.8788 | 4000 | 1.8796 |
| 1.0751 | 2.8182 | 6000 | 0.9697 |
| 0.8026 | 3.7576 | 8000 | 0.7028 |
| 0.6745 | 4.6970 | 10000 | 0.5962 |
| 0.5973 | 5.6364 | 12000 | 0.5183 |
| 0.5651 | 6.5759 | 14000 | 0.4832 |
| 0.5151 | 7.5153 | 16000 | 0.4465 |
| 0.488 | 8.4547 | 18000 | 0.4234 |
| 0.4706 | 9.3941 | 20000 | 0.4077 |
| 0.4528 | 10.3335 | 22000 | 0.3910 |
| 0.4375 | 11.2729 | 24000 | 0.3734 |
| 0.4273 | 12.2123 | 26000 | 0.3705 |
| 0.4171 | 13.1517 | 28000 | 0.3640 |
| 0.4078 | 14.0911 | 30000 | 0.3485 |
| 0.4007 | 15.0305 | 32000 | 0.3467 |
| 0.39 | 15.9699 | 34000 | 0.3367 |
| 0.3824 | 16.9093 | 36000 | 0.3300 |
| 0.3765 | 17.8488 | 38000 | 0.3277 |
| 0.372 | 18.7882 | 40000 | 0.3230 |
| 0.3682 | 19.7276 | 42000 | 0.3186 |
| 0.3638 | 20.6670 | 44000 | 0.3160 |
| 0.357 | 21.6064 | 46000 | 0.3101 |
| 0.3544 | 22.5458 | 48000 | 0.3007 |
| 0.3495 | 23.4852 | 50000 | 0.3037 |
| 0.3456 | 24.4246 | 52000 | 0.2982 |
| 0.3416 | 25.3640 | 54000 | 0.2902 |
| 0.3382 | 26.3034 | 56000 | 0.2908 |
| 0.3337 | 27.2428 | 58000 | 0.2866 |
| 0.3323 | 28.1822 | 60000 | 0.2920 |
| 0.3284 | 29.1217 | 62000 | 0.2820 |
| 0.3265 | 30.0611 | 64000 | 0.2772 |
| 0.3249 | 31.0005 | 66000 | 0.2791 |
| 0.3214 | 31.9399 | 68000 | 0.2804 |
| 0.3192 | 32.8793 | 70000 | 0.2742 |
| 0.3151 | 33.8187 | 72000 | 0.2707 |
| 0.3154 | 34.7581 | 74000 | 0.2684 |
| 0.3126 | 35.6975 | 76000 | 0.2699 |
| 0.3116 | 36.6369 | 78000 | 0.2736 |
| 0.3087 | 37.5763 | 80000 | 0.2693 |
| 0.3074 | 38.5157 | 82000 | 0.2652 |
| 0.3044 | 39.4551 | 84000 | 0.2548 |
| 0.3044 | 40.3946 | 86000 | 0.2649 |
| 0.3035 | 41.3340 | 88000 | 0.2650 |
| 0.2994 | 42.2734 | 90000 | 0.2519 |
| 0.3014 | 43.2128 | 92000 | 0.2564 |
| 0.2971 | 44.1522 | 94000 | 0.2602 |
| 0.2983 | 45.0916 | 96000 | 0.2552 |
| 0.297 | 46.0310 | 98000 | 0.2556 |
| 0.296 | 46.9704 | 100000 | 0.2553 |
| 0.2936 | 47.9098 | 102000 | 0.2568 |
| 0.2929 | 48.8492 | 104000 | 0.2480 |
| 0.2935 | 49.7886 | 106000 | 0.2557 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q4_K_S-GGUF | Triangle104 | 2024-11-26T19:14:56Z | 16 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:knifeayumu/Cydonia-v1.3-Magnum-v4-22B",
"base_model:quantized:knifeayumu/Cydonia-v1.3-Magnum-v4-22B",
"license:other",
"region:us",
"conversational"
] | null | 2024-11-26T19:12:34Z | ---
base_model: knifeayumu/Cydonia-v1.3-Magnum-v4-22B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
---
# Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q4_K_S-GGUF
This model was converted to GGUF format from [`knifeayumu/Cydonia-v1.3-Magnum-v4-22B`](https://huggingface.co/knifeayumu/Cydonia-v1.3-Magnum-v4-22B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/knifeayumu/Cydonia-v1.3-Magnum-v4-22B) for more details on the model.
---
Model details:
-
The Drummer becomes hornier (again)
Recipe based on knifeayumu/Cydonia-v1.2-Magnum-v4-22B but uses TheDrummer/Cydonia-22B-v1.3 as the base. Yes, MortalWombat. I'm gonna use your parameters as long as I can!
This is a merge of pre-trained language models created using mergekit.
Merge Method
-
This model was merged using the SLERP merge method.
Models Merged
-
The following models were included in the merge:
TheDrummer/Cydonia-22B-v1.3
anthracite-org/magnum-v4-22b
Configuration
-
The following YAML configuration was used to produce this model:
models:
- model: TheDrummer/Cydonia-22B-v1.3
- model: anthracite-org/magnum-v4-22b
merge_method: slerp
base_model: TheDrummer/Cydonia-22B-v1.3
parameters:
t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: bfloat16
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q4_K_S-GGUF --hf-file cydonia-v1.3-magnum-v4-22b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q4_K_S-GGUF --hf-file cydonia-v1.3-magnum-v4-22b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q4_K_S-GGUF --hf-file cydonia-v1.3-magnum-v4-22b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Cydonia-v1.3-Magnum-v4-22B-Q4_K_S-GGUF --hf-file cydonia-v1.3-magnum-v4-22b-q4_k_s.gguf -c 2048
```
|
KnutJaegersberg/Teuken-7B-instruct-commercial-v0.4-8.0bpw-exl2 | KnutJaegersberg | 2024-11-26T19:10:41Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"de",
"bg",
"cs",
"da",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sl",
"sv",
"sk",
"arxiv:2410.08800",
"arxiv:2410.03730",
"arxiv:2410.08928",
"base_model:openGPT-X/Teuken-7B-base-v0.4",
"base_model:quantized:openGPT-X/Teuken-7B-base-v0.4",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | 2024-11-26T18:16:32Z | ---
language:
- de
- bg
- cs
- da
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sl
- sv
- sk
metrics:
- accuracy
- bleu
pipeline_tag: text-generation
library_name: transformers
base_model:
- openGPT-X/Teuken-7B-base-v0.4
license: apache-2.0
---
# Model Card for Teuken-7B-instruct-commercial-v0.4
[Teuken-7B-instruct-commercial-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-instruct-commercial-v0.4) is an instruction-tuned 7B parameter multilingual large language model (LLM) pre-trained with 4T tokens in all official 24 European languages and released under Apache 2.0 in the research project [OpenGPT-X](https://opengpt-x.de).
The base model Teuken-7B-base-v0.4 is available on request 📧 <a href="[email protected]">[email protected]</a>.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Fraunhofer, Forschungszentrum Jülich, TU Dresden, DFKI
- **Funded by:** German Federal Ministry of Economics and Climate Protection (BMWK) in the context of the OpenGPT-X project
- **Model type:** Transformer based decoder-only model
- **Language(s) (NLP):** bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
- **Shared by:** OpenGPT-X
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
[Teuken-7B-instruct-commercial-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-instruct-commercial-v0.4) is intended for commercial and research use in all official 24 European languages. Since [Teuken-7B-instruct-commercial-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-instruct-commercial-v0.4) focuses on covering all 24 EU languages, it renders more stable results across these languages and better reflects European values in its answers than English-centric models. It is therefore specialized for use in multilingual tasks.
## Disclaimer: Toxic Content
This Large Language Model (LLM) may generate content that is inappropriate, offensive, or harmful. While the dataset has been heavily filtered to minimize such outputs, the model may still produce text that is biased or toxic due to the large scale and diverse nature of the data.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
The model is not intended for use in math and coding tasks.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[Teuken-7B-instruct-commercial-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-instruct-commercial-v0.4) is an instruction-tuned version of Teuken-7B-base-v0.4 (which is available on request 📧 <a href="[email protected]">[email protected]</a>) that is not completely free from biases and hallucinations.
## How to Get Started with the Model
## Usage
The model requires transformers, sentencepiece, and the torch library.
After installation, here's an example of how to use the model:
As this model is a fine-tuned model, it must be used with the provided prompt template. Using the model without the prompt template is not intended and is not recommended. The prompt template is defined as follows:
```python
user="Hi!"
lang_code = "DE"
system_messages={
"EN": "A chat between a human and an artificial intelligence assistant."
" The assistant gives helpful and polite answers to the human's questions.",
"DE": "Ein Gespräch zwischen einem Menschen und einem Assistenten mit künstlicher Intelligenz."
" Der Assistent gibt hilfreiche und höfliche Antworten auf die Fragen des Menschen.",
}
prompt = f"System: {system_messages[lang_code]}\nUser: {user}\nAssistant:"
```
The prompt template is also directly integrated in the Tokenizer and can be used as follows:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name = "openGPT-X/Teuken-7B-instruct-commercial-v0.4"
model = AutoModelForCausalLM.from_pretrained(
model_name,
trust_remote_code=True,
torch_dtype=torch.bfloat16,
)
model = model.to(device).eval()
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=False,
trust_remote_code=True,
)
messages = [{"role": "User", "content": "Wer bist du?"}]
prompt_ids = tokenizer.apply_chat_template(messages, chat_template="DE", tokenize=True, add_generation_prompt=True, return_tensors="pt")
prediction = model.generate(
prompt_ids.to(model.device),
max_length=512,
do_sample=True,
top_k=50,
top_p=0.95,
temperature=0.7,
num_return_sequences=1,
)
prediction_text = tokenizer.decode(prediction[0].tolist())
print(prediction_text)
```
This example demonstrates how to load the model and tokenizer, prepare input, generate text, and print the result.
## Training Details
### Pre-Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Teuken-7B-base-v0.4 was pre-trained on 4 trillion tokens of data from publicly available sources.
The pretraining data has a cutoff of September 2023.
More information is available in our preprint ["Data Processing for the OpenGPT-X Model Family"](http://arxiv.org/abs/2410.08800).
### Instruction-Tuning Data
The model was fine-tuned on a collection of English- and German-focused instruction-tuning datasets, which also contains instructions for 22 official European languages.
The dataset composition contains three types of data: multilingual data, English data, and translated German data.
#### English data
* We only included a subsample of the OpenOrca dataset.
* To select instruction-tuning examples based on their quality, we calculated the reward scores of all English examples using [Starling-RM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-RM-7B-alpha) (Apache-2.0 license).
We aim to include roughly the same number of English examples as we have multilingual examples:
1. Add all multi-turn examples.
2. Add the entire `code_alpaca` dataset subset.
3. For the remaining dataset subsets (`open_orca`, `evol_instruct_143k`, `evol_instruct_70k`, `sharegpt_v3`, `ultrachat_200k`), add the samples with the highest reward scores so that each dataset subset contributes an equal amount of high-quality examples (a minimal selection sketch is shown below).
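A minimal sketch of that per-subset selection step, with hypothetical field names (`subset`, `reward`) — this is only an illustration, not the actual OpenGPT-X pipeline:

```python
# Illustrative only: keep the top-scoring examples from each subset so all subsets contribute equally.
def select_top_per_subset(examples, per_subset_budget):
    by_subset = {}
    for ex in examples:  # each ex is a dict with hypothetical keys "subset" and "reward"
        by_subset.setdefault(ex["subset"], []).append(ex)
    selected = []
    for items in by_subset.values():
        items.sort(key=lambda ex: ex["reward"], reverse=True)
        selected.extend(items[:per_subset_budget])
    return selected
```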
#### German data
As we aim for a German- and English-centric European language dataset, and due to the sparsity of large-scale German instruction-tuning data, we translated the English portion of the above-described dataset composition. For this, we applied the [Alma-13B](https://huggingface.co/haoranxu/ALMA-13B) (MIT license) model. As code can be a problematic case for translation, we implemented a regex-based code detection functionality. With it, we exclude code snippets from translation and re-insert them after translation.
As the `alpaca_code` subset contains many code snippets not detectable by our regex-based code detection implementation, we excluded this part of the dataset from translation.
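The exact detection regex is not published here; the sketch below only illustrates the general idea of masking code spans before translation and restoring them afterwards (`translate` stands in for the ALMA-13B translation call, and the pattern only catches fenced blocks):

```python
import re

# Simplistic stand-in for the code detector described above: only matches fenced code blocks.
CODE_BLOCK = re.compile(r"```.*?```", re.DOTALL)

def translate_preserving_code(text, translate):
    """Mask code blocks, translate the remaining text, then re-insert the original blocks."""
    blocks = CODE_BLOCK.findall(text)
    masked = text
    for i, block in enumerate(blocks):
        masked = masked.replace(block, f"__CODE_{i}__", 1)
    translated = translate(masked)  # hypothetical callable wrapping the translation model
    for i, block in enumerate(blocks):
        translated = translated.replace(f"__CODE_{i}__", block, 1)
    return translated
```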
#### Multilingual data
For multilingual data, we include the 14 official European languages contained in the [aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) and the 21 official European languages contained in the `translated_flan_cot` dataset of the [aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection/viewer/translated_flan_cot).
#### Datasets and Licenses
| Name | Language | License |
| :--------------------------------------------------------------------------------------------------------------------- | :----------- | :---------------------------------------------------------------------------------------------------------------- |
| [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) | EN | MIT |
| [sahil2801/CodeAlpaca-20k](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k) | EN | CC-BY-4.0 |
| [WizardLM/WizardLM_evol_instruct_V2_196k](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) | EN | MIT |
| [WizardLM/WizardLM_evol_instruct_70k](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_70k) | EN | MIT |
| [anon8231489123/ShareGPT_Vicuna_unfiltered](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) | EN | Apache-2.0 |
| [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) | EN | MIT |
| [CohereForAI/aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) | Multilingual | Apache-2.0 |
| [CohereForAI/aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection) | Multilingual | Apache-2.0 |
| [FreedomIntelligence/sharegpt-deutsch](https://huggingface.co/datasets/FreedomIntelligence/sharegpt-deutsch) | DE | Apache-2.0 |
| [bjoernp/ultrachat_de](https://huggingface.co/datasets/bjoernp/ultrachat_de) | DE | MIT |
Dataset contribution per language:
| | total | de_freedomintelligence_sharegpt | de_ultrachat_de | translated_flan_cot | aya_dataset | ultrachat_200k_translated_to_de | sharegpt_v3_unfiltered_translated_to_de | evol_instruct_143k_translated_to_de | evol_instruct_70k_translated_to_de | open_orca_translated_to_de | ultrachat_200k | sharegpt_v3_unfiltered | code_alpaca | open_orca | evol_instruct_143k | evol_instruct_70k |
|:---|--------:|----------------------------------:|------------------:|----------------------:|--------------:|----------------------------------:|------------------------------------------:|--------------------------------------:|-------------------------------------:|-----------------------------:|-----------------:|-------------------------:|--------------:|------------:|---------------------:|--------------------:|
| BG | 1909 | 0 | 0 | 1909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| CS | 1885 | 0 | 0 | 1885 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| DA | 2001 | 0 | 0 | 1906 | 95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| DE | 77628 | 5818 | 898 | 1896 | 231 | 6940 | 37555 | 8116 | 8065 | 8109 | 0 | 0 | 0 | 0 | 0 | 0 |
| ET | 1901 | 0 | 0 | 1901 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| EL | 2472 | 0 | 0 | 1881 | 591 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| ES | 3800 | 0 | 0 | 1898 | 1902 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| EN | 80806 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6915 | 37600 | 12013 | 8074 | 8099 | 8105 |
| FI | 2598 | 0 | 0 | 1890 | 708 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| FR | 3250 | 0 | 0 | 1890 | 1360 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| HU | 1985 | 0 | 0 | 1892 | 93 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| MT | 1918 | 0 | 0 | 1918 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| IT | 2613 | 0 | 0 | 1910 | 703 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| LT | 2800 | 0 | 0 | 1920 | 880 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| NL | 3549 | 0 | 0 | 1905 | 1644 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| PL | 3322 | 0 | 0 | 1909 | 1413 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| PT | 3806 | 0 | 0 | 1897 | 1909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| RO | 1888 | 0 | 0 | 1888 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| GA | 3069 | 0 | 0 | 1880 | 1189 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| SK | 1922 | 0 | 0 | 1922 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| SL | 1894 | 0 | 0 | 1894 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| SV | 3160 | 0 | 0 | 1916 | 1244 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Total across languages: 210,176
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
Instruction fine-tuned version of [Teuken-7B-base-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-base-v0.4).
More information regarding the pre-training is available in our model preprint ["Teuken-7B-Base & Teuken-7B-Instruct: Towards European LLMs"](https://arxiv.org/abs/2410.03730).
#### Training Hyperparameters
- **Training regime:** bf16 mixed precision <!--fp32, fp16 mixed precision, , bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Results on multilingual benchmarks for 21 European languages with instruction-tuned models
| Model | Avg. | EU21-ARC | EU21-HeSw | EU21-TQA | EU21-MMLU |
|--------------------------------|--------|----------|-----------|----------|-----------|
| Meta-Llama-3.1-8B-Instruct | **.563** | .563 | .579 | .532 | **.576** |
| Mistral-7B-Instruct-v0.3 | .527 | .530 | .538 | **.548** | .491 |
| Salamandra-7B-Instruct | .543 | **.595** | **.637** | .482 | .459 |
| Aya-23-8B | .485 | .475 | .535 | .476 | .455 |
| Occiglot-7B-eu5-Instruct | .475 | .484 | .519 | .471 | .428 |
| Pharia-1-LLM-7B-C-A | .417 | .396 | .438 | .469 | .366 |
| Bloomz-7B1 | .358 | .316 | .354 | .461 | .302 |
| **Teuken-7B-instruct-commercial-v0.4** | .531 | .569 | .620 | .503 | .430 |
More information regarding the quality of our translated benchmarks is available in our evaluation preprint ["Towards Multilingual LLM Evaluation for European Languages"](https://arxiv.org/abs/2410.08928).
More evaluation results regarding Teuken-7B-instruct-research-v0.4 are available in our model preprint ["Teuken-7B-Base & Teuken-7B-Instruct: Towards European LLMs"](https://arxiv.org/abs/2410.03730).
The model was evaluated in 21 languages on ARC, GSM8K, HellaSwag, TruthfulQA, Translation and MMLU. Results can also be seen in the [European LLM Leaderboard](https://huggingface.co/spaces/openGPT-X/european-llm-leaderboard).
## Technical Specifications
### Model Architecture and Objective
| Hyper-Parameter | Value |
|----------------------------|----------|
| Training Objective | CLM |
| Activation Function | SwiGLU |
| Seq Length | 4096 |
| Position Embeddings | Rotary |
| Num Layers | 32 |
| Hidden Size | 4096 |
| FFN Hidden Size | 13440 |
| Num Attention Heads | 32 |
| Head Dim | 128 |
| Group Query Attention | yes |
| Num Query Groups | 2 |
| Normalization | RMSNorm |
| Learning rate | 3e-4 |
| Min learning rate | 3e-5 |
| Disable bias in linear | yes |
| Hidden dropout | 0.0 |
| Attention dropout | 0.0 |
| Optimizer | AdamW |
| Beta1 | 0.9 |
| Beta2 | 0.95 |
| Data-type | bf16 |
| Recompute-activations | yes |
| Distributed-optimizers | yes |
### Compute Infrastructure
We trained our models on JUWELS Booster which consists of 936 compute nodes, each equipped with 4 NVIDIA A100 GPUs. The GPUs are hosted by AMD EPYC Rome CPUs. The compute nodes are connected with HDR-200 InfiniBand in a DragonFly+ topology.
#### Hardware
The configuration of JUWELS Booster compute nodes is the following:
CPU: AMD EPYC 7402 processor; 2 sockets, 24 cores per socket, SMT-2 (total: 2×24×2 = 96 threads) in NPS-4 1 configuration
Memory: 512 GB DDR4-3200 RAM (of which at least 20 GB is taken by the system software stack, including the file system); 256 GB per socket; 8 memory channels per socket (2 channels per NUMA domain)
GPU: 4 × NVIDIA A100 Tensor Core GPU with 40 GB; connected via NVLink3 to each other
Network: 4 × Mellanox HDR200 InfiniBand ConnectX 6 (200 Gbit/s each), HCA
Periphery: CPU, GPU, and network adapter are connected via 2 PCIe Gen 4 switches with 16 PCIe lanes going to each device (CPU socket: 2×16 lanes). PCIe switches are configured in synthetic mode.
#### Software
[Megatron-LM](https://github.com/OpenGPTX/Megatron-LM)
**BibTeX:**
If you find our model useful in your research, please consider citing our [preprint](https://arxiv.org/abs/2410.03730):
```
@misc{ali2024teuken7bbaseteuken7binstructeuropean,
title={Teuken-7B-Base & Teuken-7B-Instruct: Towards European LLMs},
author={Mehdi Ali and Michael Fromm and Klaudia Thellmann and Jan Ebert and Alexander Arno Weber and Richard Rutmann and Charvi Jain and Max Lübbering and Daniel Steinigen and Johannes Leveling and Katrin Klug and Jasper Schulze Buschhoff and Lena Jurkschat and Hammam Abdelwahab and Benny Jörg Stein and Karl-Heinz Sylla and Pavel Denisov and Nicolo' Brandizzi and Qasid Saleem and Anirban Bhowmick and Lennard Helmer and Chelsea John and Pedro Ortiz Suarez and Malte Ostendorff and Alex Jude and Lalith Manjunath and Samuel Weinbach and Carolin Penke and Oleg Filatov and Shima Asaadi and Fabio Barth and Rafet Sifa and Fabian Küch and Andreas Herten and René Jäkel and Georg Rehm and Stefan Kesselheim and Joachim Köhler and Nicolas Flores-Herr},
year={2024},
eprint={2410.03730},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.03730},
}
```
# Team
## Data Team
Anirban Bhowmick (IAIS), Nicolo Brandizzi (IAIS), Lennard Helmer (IAIS), Benny Jörg Stein (IAIS), Karl-Heinz Sylla (IAIS), Pavel Denisov (IAIS), Qasid Saleem (IAIS), Johannes Leveling (IAIS), Hammam Abdelwahab (IAIS), Luzian Hahn (IIS), Farzad Naderi (IIS), Md Saiful Islam (IIS), Alexander Schwirjow (IIS), Pedro Ortiz Suarez (ex. DFKI), Malte Ostendorff (ex. DFKI)
## Model-Training Team
### Core contributors
Mehdi Ali (IAIS), Michael Fromm (IAIS), Jan Ebert (FZJ), Chelsea John (FZJ), Lena Jurkschat (TUD), Alexander Weber (IAIS)
### Contributors:
Richard Rutmann (IAIS), Daniel Steinigen (IAIS), Lalith Manjunath (TUD), Carolin Penke (FZJ)
## Evaluation Team
### Core contributors
Klaudia Thellmann (TUD), Alex Jude (IAIS), Jasper Buschhoff (IAIS)
### Contributors:
Shima Assadi (IIS), Fabio Barth (DFKI)
## Management
Joachim Köhler (IAIS), Nicolas Flores-Herr (IAIS), Stefan Kesselheim (FZJ), Andreas Herten (FZJ), Georg Rehm (DFKI), René Jäkel (TUD), Fabian Küch (IIS), Nicole Hildebrandt (IAIS), Ines Wendler (IAIS)
We believe that collaboration is key to overcome the aforementioned limitations and thereby strengthening the European GenAI landscape. Because of this, the team invites researchers, developers, and AI enthusiasts to join and engage through various platforms. A Discord server has been created for community collaboration, offering a space for discussions on technical details, ideas, and direct interaction with developers. Additionally, resources like research publications and a European LLM Leaderboard provide insights into Teuken-7B’s performance and technical aspects. The OpenGPT-X team encourages ongoing engagement and collaboration as the project evolves.
Key links:
Discord: OpenGPT-X [Discord server](https://discord.com/invite/RvdHpGMvB3)
Research Papers: OpenGPT-X News [Research Papers](https://opengpt-x.de/en/news-en/)
LLM Leaderboard: European LLM Leaderboard [LLM Leaderboard](https://huggingface.co/spaces/openGPT-X/european-llm-leaderboard)
<div class="hf-card">
<h2>Contact Information</h2>
<p>You can reach out to the following model card contact:</p>
<ul>
<li>
<a href="https://huggingface.co/openGPT-X" target="_blank">OpenGPT-X</a>
- <a href="[email protected]">[email protected]</a>
</li>
</ul>
</div> |
bartowski/Teuken-7B-instruct-research-v0.4-GGUF | bartowski | 2024-11-26T19:10:36Z | 697 | 0 | null | [
"gguf",
"text-generation",
"de",
"bg",
"cs",
"da",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sl",
"sv",
"sk",
"base_model:openGPT-X/Teuken-7B-instruct-research-v0.4",
"base_model:quantized:openGPT-X/Teuken-7B-instruct-research-v0.4",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T18:43:31Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: openGPT-X/Teuken-7B-instruct-research-v0.4
metrics:
- accuracy
- bleu
license: other
language:
- de
- bg
- cs
- da
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sl
- sv
- sk
---
## Llamacpp imatrix Quantizations of Teuken-7B-instruct-research-v0.4
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4132">b4132</a> for quantization.
Original model: https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
No prompt format found, check original model page
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Teuken-7B-instruct-research-v0.4-f16.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-f16.gguf) | f16 | 14.91GB | false | Full F16 weights. |
| [Teuken-7B-instruct-research-v0.4-Q8_0.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q8_0.gguf) | Q8_0 | 7.93GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Teuken-7B-instruct-research-v0.4-Q6_K_L.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q6_K_L.gguf) | Q6_K_L | 6.80GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Teuken-7B-instruct-research-v0.4-Q6_K.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q6_K.gguf) | Q6_K | 6.55GB | false | Very high quality, near perfect, *recommended*. |
| [Teuken-7B-instruct-research-v0.4-Q5_K_L.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q5_K_L.gguf) | Q5_K_L | 5.90GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Teuken-7B-instruct-research-v0.4-Q5_K_M.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q5_K_M.gguf) | Q5_K_M | 5.65GB | false | High quality, *recommended*. |
| [Teuken-7B-instruct-research-v0.4-Q5_K_S.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q5_K_S.gguf) | Q5_K_S | 5.38GB | false | High quality, *recommended*. |
| [Teuken-7B-instruct-research-v0.4-Q4_K_L.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q4_K_L.gguf) | Q4_K_L | 5.27GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Teuken-7B-instruct-research-v0.4-Q4_K_M.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q4_K_M.gguf) | Q4_K_M | 5.02GB | false | Good quality, default size for most use cases, *recommended*. |
| [Teuken-7B-instruct-research-v0.4-Q4_K_S.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q4_K_S.gguf) | Q4_K_S | 4.70GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Teuken-7B-instruct-research-v0.4-Q3_K_XL.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q3_K_XL.gguf) | Q3_K_XL | 4.57GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Teuken-7B-instruct-research-v0.4-Q4_0.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q4_0.gguf) | Q4_0 | 4.48GB | false | Legacy format, generally not worth using over similarly sized formats |
| [Teuken-7B-instruct-research-v0.4-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q4_0_8_8.gguf) | Q4_0_8_8 | 4.46GB | false | Optimized for ARM and AVX inference. Requires 'sve' support for ARM (see details below). *Don't use on Mac*. |
| [Teuken-7B-instruct-research-v0.4-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q4_0_4_8.gguf) | Q4_0_4_8 | 4.46GB | false | Optimized for ARM inference. Requires 'i8mm' support (see details below). *Don't use on Mac*. |
| [Teuken-7B-instruct-research-v0.4-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q4_0_4_4.gguf) | Q4_0_4_4 | 4.46GB | false | Optimized for ARM inference. Should work well on all ARM chips, not for use with GPUs. *Don't use on Mac*. |
| [Teuken-7B-instruct-research-v0.4-IQ4_XS.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-IQ4_XS.gguf) | IQ4_XS | 4.32GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Teuken-7B-instruct-research-v0.4-Q3_K_L.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q3_K_L.gguf) | Q3_K_L | 4.32GB | false | Lower quality but usable, good for low RAM availability. |
| [Teuken-7B-instruct-research-v0.4-Q3_K_M.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q3_K_M.gguf) | Q3_K_M | 4.15GB | false | Low quality. |
| [Teuken-7B-instruct-research-v0.4-IQ3_M.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-IQ3_M.gguf) | IQ3_M | 3.95GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Teuken-7B-instruct-research-v0.4-Q3_K_S.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q3_K_S.gguf) | Q3_K_S | 3.84GB | false | Low quality, not recommended. |
| [Teuken-7B-instruct-research-v0.4-IQ3_XS.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-IQ3_XS.gguf) | IQ3_XS | 3.70GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Teuken-7B-instruct-research-v0.4-Q2_K_L.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q2_K_L.gguf) | Q2_K_L | 3.68GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Teuken-7B-instruct-research-v0.4-Q2_K.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-Q2_K.gguf) | Q2_K | 3.43GB | false | Very low quality but surprisingly usable. |
| [Teuken-7B-instruct-research-v0.4-IQ2_M.gguf](https://huggingface.co/bartowski/Teuken-7B-instruct-research-v0.4-GGUF/blob/main/Teuken-7B-instruct-research-v0.4-IQ2_M.gguf) | IQ2_M | 3.26GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Teuken-7B-instruct-research-v0.4-GGUF --include "Teuken-7B-instruct-research-v0.4-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Teuken-7B-instruct-research-v0.4-GGUF --include "Teuken-7B-instruct-research-v0.4-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Teuken-7B-instruct-research-v0.4-Q8_0) or download them all in place (./)
</details>
## Q4_0_X_X information
<details>
<summary>Click to view Q4_0_X_X information</summary>
These are *NOT* for Metal (Apple) or GPU (nvidia/AMD/intel) offloading, only ARM chips (and certain AVX2/AVX512 CPUs).
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
If you're using a CPU that supports AVX2 or AVX512 (typically server CPUs and AMD's latest Zen5 CPUs) and are not offloading to a GPU, the Q4_0_8_8 may offer a nice speedup as well:
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also runs on AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Chirayu/nl2pandas | Chirayu | 2024-11-26T19:10:05Z | 280 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"code",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-09T22:41:18Z | ---
license: mit
tags:
- code
---
# What does this model do?
This model converts natural language input into a pandas query. It is a fine-tuned CodeT5+ 220M. This model is part of the nl2query repository, available at https://github.com/Chirayu-Tripathi/nl2query
You can use this model via the GitHub repository or via the following code. More information can be found in the repository.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model = AutoModelForSeq2SeqLM.from_pretrained("Chirayu/nl2pandas")
tokenizer = AutoTokenizer.from_pretrained("Chirayu/nl2pandas")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
textual_query = '''pandas: which cabinet has average age less than 21? | titanic : passengerid, survived, pclass, name, sex, age, sibsp, parch, ticket, fare, cabin, embarked'''
def generate_query(
textual_query: str,
num_beams: int = 10,
max_length: int = 128,
repetition_penalty: int = 2.5,
length_penalty: int = 1,
early_stopping: bool = True,
top_p: int = 0.95,
top_k: int = 50,
num_return_sequences: int = 1,
) -> str:
input_ids = tokenizer.encode(
textual_query, return_tensors="pt", add_special_tokens=True
)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
input_ids = input_ids.to(device)
generated_ids = model.generate(
input_ids=input_ids,
num_beams=num_beams,
max_length=max_length,
repetition_penalty=repetition_penalty,
length_penalty=length_penalty,
early_stopping=early_stopping,
top_p=top_p,
top_k=top_k,
num_return_sequences=num_return_sequences,
)
query = [
tokenizer.decode(
generated_id,
skip_special_tokens=True,
clean_up_tokenization_spaces=True,
)
for generated_id in generated_ids
][0]
return query
```
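The helper above can then be called on the prepared query string, for example:

```python
pandas_query = generate_query(textual_query)
print(pandas_query)  # the exact output depends on decoding settings
```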
|
Sparzyo/vg | Sparzyo | 2024-11-26T19:02:34Z | 31 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-26T19:02:30Z | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: vg-style
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# vg
<Gallery />
## Model description
## Trigger words
You should use `vg-style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Sparzyo/vg/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
neuralmagic/Qwen2.5-32B-quantized.w8a16 | neuralmagic | 2024-11-26T18:42:15Z | 14 | 0 | null | [
"safetensors",
"qwen2",
"int8",
"vllm",
"llm-compressor",
"text-generation",
"conversational",
"en",
"arxiv:2210.17323",
"base_model:Qwen/Qwen2.5-32B",
"base_model:quantized:Qwen/Qwen2.5-32B",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | text-generation | 2024-10-09T18:29:15Z | ---
tags:
- int8
- vllm
- llm-compressor
language:
- en
pipeline_tag: text-generation
license: apache-2.0
base_model:
- Qwen/Qwen2.5-32B
---
# Qwen2.5-32B-quantized.w8a16
## Model Overview
- **Model Architecture:** Qwen2
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** INT8
- **Intended Use Cases:** Similarly to [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B), this is a base language model.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- **Release Date:** 10/09/2024
- **Version:** 1.0
- **Model Developers:** Neural Magic
Quantized version of [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B).
It achieves an OpenLLMv1 score of 75.4, compared to 75.3 for [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B).
### Model Optimizations
This model was obtained by quantizing the weights of [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) to INT8 data type.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.
Only the weights of the linear operators within transformers blocks are quantized.
Symmetric per-channel quantization is applied, in which a linear scaling per output dimension maps the INT8 and floating point representations of the quantized weights.
The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library.
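As a rough numerical illustration of symmetric per-channel weight quantization — this is only the scaling scheme described above, not the GPTQ algorithm itself, which additionally minimizes a layer-wise reconstruction error:

```python
import numpy as np

def quantize_per_channel_int8(weight):
    """Symmetric per-output-channel INT8 quantization of a [out_features, in_features] weight."""
    scales = np.abs(weight).max(axis=1, keepdims=True) / 127.0  # one scale per output channel
    q = np.clip(np.round(weight / scales), -127, 127).astype(np.int8)
    return q, scales

def dequantize(q, scales):
    return q.astype(np.float32) * scales

w = np.random.randn(16, 64).astype(np.float32)
q, s = quantize_per_channel_int8(w)
print(np.abs(w - dequantize(q, s)).max())  # per-channel reconstruction error stays small
```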
## Deployment
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "neuralmagic/Qwen2.5-32B-quantized.w8a16"
number_gpus = 1
max_model_len = 8192
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Give me a short introduction to large language model."
llm = LLM(model=model_id, tensor_parallel_size=number_gpus, max_model_len=max_model_len)
outputs = llm.generate(prompt, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
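A minimal client sketch against such a server — assuming it was started with something like `vllm serve neuralmagic/Qwen2.5-32B-quantized.w8a16` on the default port; the endpoint path and payload follow the OpenAI completions format:

```python
# Minimal sketch: query a locally running vLLM OpenAI-compatible server.
import requests

response = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "neuralmagic/Qwen2.5-32B-quantized.w8a16",
        "prompt": "Give me a short introduction to large language model.",
        "max_tokens": 128,
    },
)
print(response.json()["choices"][0]["text"])
```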
## Evaluation
The model was evaluated on the OpenLLMv1 benchmark, composed of MMLU, ARC-Challenge, GSM-8K, Hellaswag, Winogrande and TruthfulQA.
Evaluation was conducted using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and the [vLLM](https://docs.vllm.ai/en/stable/) engine.
### Accuracy
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Qwen2.5-32B</strong>
</td>
<td><strong>Qwen2.5-32B-quantized.w8a16<br>(this model)</strong>
</td>
<td><strong>Recovery</strong>
</td>
</tr>
<tr>
<td rowspan="8" ><strong>OpenLLM v1</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>83.25
</td>
<td>83.19
</td>
<td>99.9%
</td>
</tr>
<tr>
<td>ARC Challenge (25-shot)
</td>
<td>66.30
</td>
<td>66.04
</td>
<td>99.6%
</td>
</tr>
<tr>
<td>GSM-8k (5-shot, strict-match)
</td>
<td>78.09
</td>
<td>78.62
</td>
<td>100.7%
</td>
</tr>
<tr>
<td>Hellaswag (10-shot)
</td>
<td>85.08
</td>
<td>85.14
</td>
<td>100.1%
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>81.29
</td>
<td>81.61
</td>
<td>100.4%
</td>
</tr>
<tr>
<td>TruthfulQA (0-shot, mc2)
</td>
<td>57.76
</td>
<td>57.78
</td>
<td>100.0%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>75.30</strong>
</td>
<td><strong>75.40</strong>
</td>
<td><strong>100.1%</strong>
</td>
</tr>
</table>
### Reproduction
The results were obtained using the following command:
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Qwen2.5-32B-quantized.w8a16",dtype=auto,max_model_len=4096,add_bos_token=True,tensor_parallel_size=1 \
--tasks openllm \
--batch_size auto
```
|
friendshipkim/Llama-3.2-1B-pruned-h0.5-i0.5-a0.0 | friendshipkim | 2024-11-26T18:35:23Z | 168 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-05T02:45:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
neuralmagic/Qwen2.5-3B-quantized.w8a16 | neuralmagic | 2024-11-26T18:35:11Z | 10 | 0 | null | [
"safetensors",
"qwen2",
"int8",
"vllm",
"llm-compressor",
"text-generation",
"conversational",
"en",
"arxiv:2210.17323",
"base_model:Qwen/Qwen2.5-3B",
"base_model:quantized:Qwen/Qwen2.5-3B",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | text-generation | 2024-10-09T17:42:48Z | ---
tags:
- int8
- vllm
- llm-compressor
language:
- en
pipeline_tag: text-generation
license: apache-2.0
base_model:
- Qwen/Qwen2.5-3B
---
# Qwen2.5-3B-quantized.w8a16
## Model Overview
- **Model Architecture:** Qwen2
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** INT8
- **Intended Use Cases:** Like [Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B), this is a base language model.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- **Release Date:** 10/09/2024
- **Version:** 1.0
- **Model Developers:** Neural Magic
Quantized version of [Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B).
It achieves an OpenLLMv1 score of 63.8, compared to 63.6 for [Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B).
### Model Optimizations
This model was obtained by quantizing the weights of [Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B) to INT8 data type.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.
Only the weights of the linear operators within transformers blocks are quantized.
Symmetric per-channel quantization is applied, in which a per-output-channel linear scale maps between the INT8 and floating-point representations of the quantized weights.
The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library.
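As a rough illustration of the per-channel scheme (not the GPTQ error-compensation step itself), the sketch below quantizes a weight matrix with one symmetric INT8 scale per output channel; the tensor shapes are hypothetical.
```python
import torch

def quantize_per_channel_int8(weight: torch.Tensor):
    # weight: [out_features, in_features]; one symmetric scale per output channel (row)
    scales = weight.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(weight / scales), -127, 127).to(torch.int8)
    return q, scales

def dequantize(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    # Recover an approximate floating-point weight from INT8 values and per-channel scales
    return q.float() * scales

W = torch.randn(16, 64)                      # stand-in for a linear layer's weight
q, s = quantize_per_channel_int8(W)
print((W - dequantize(q, s)).abs().max())    # worst-case rounding error
```
GPTQ goes further by updating the not-yet-quantized columns to compensate for the rounding error introduced at each step; the sketch above only shows the resulting storage format.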
## Deployment
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "neuralmagic/Qwen2.5-3B-quantized.w8a16"
number_gpus = 1
max_model_len = 8192
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Give me a short introduction to large language model."
llm = LLM(model=model_id, tensor_parallel_size=number_gpus, max_model_len=max_model_len)
outputs = llm.generate(prompt, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
## Evaluation
The model was evaluated on the OpenLLMv1 benchmark, composed of MMLU, ARC-Challenge, GSM-8K, Hellaswag, Winogrande and TruthfulQA.
Evaluation was conducted using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and the [vLLM](https://docs.vllm.ai/en/stable/) engine.
### Accuracy
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Qwen2.5-3B</strong>
</td>
<td><strong>Qwen2.5-3B-quantized.w8a16<br>(this model)</strong>
</td>
<td><strong>Recovery</strong>
</td>
</tr>
<tr>
<td rowspan="8" ><strong>OpenLLM v1</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>65.68
</td>
<td>65.65
</td>
<td>100.0%
</td>
</tr>
<tr>
<td>ARC Challenge (25-shot)
</td>
<td>53.58
</td>
<td>53.07
</td>
<td>99.0%
</td>
</tr>
<tr>
<td>GSM-8k (5-shot, strict-match)
</td>
<td>68.23
</td>
<td>70.05
</td>
<td>102.7%
</td>
</tr>
<tr>
<td>Hellaswag (10-shot)
</td>
<td>51.83
</td>
<td>51.78
</td>
<td>99.9%
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>70.64
</td>
<td>70.56
</td>
<td>99.9%
</td>
</tr>
<tr>
<td>TruthfulQA (0-shot, mc2)
</td>
<td>49.93
</td>
<td>48.88
</td>
<td>99.9%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>63.59</strong>
</td>
<td><strong>63.78</strong>
</td>
<td><strong>100.3%</strong>
</td>
</tr>
</table>
### Reproduction
The results were obtained using the following command:
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Qwen2.5-3B-quantized.w8a16",dtype=auto,max_model_len=4096,add_bos_token=True,tensor_parallel_size=1 \
--tasks openllm \
--batch_size auto
```
|
saintsauce/bert-base-uncased_finetuned_model_lr_2e-05 | saintsauce | 2024-11-26T18:34:08Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T18:33:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jeremierostan/shakespeare-llama | jeremierostan | 2024-11-26T18:33:06Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T15:01:06Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: shakespeare-llama
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shakespeare-llama
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Tokenizers 0.20.3
|
neuralmagic/Qwen2.5-0.5B-quantized.w8a16 | neuralmagic | 2024-11-26T18:32:07Z | 19 | 0 | null | [
"safetensors",
"qwen2",
"int8",
"vllm",
"llm-compressor",
"text-generation",
"conversational",
"en",
"arxiv:2210.17323",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:quantized:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | text-generation | 2024-10-09T17:22:52Z | ---
tags:
- int8
- vllm
- llm-compressor
language:
- en
pipeline_tag: text-generation
license: apache-2.0
base_model:
- Qwen/Qwen2.5-0.5B
---
# Qwen2.5-0.5B-quantized.w8a16
## Model Overview
- **Model Architecture:** Qwen2
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** INT8
- **Intended Use Cases:** Like [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B), this is a base language model.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- **Release Date:** 10/09/2024
- **Version:** 1.0
- **Model Developers:** Neural Magic
Quantized version of [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It achieves an OpenLLMv1 score of 43.9, compared to 44.0 for [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
### Model Optimizations
This model was obtained by quantizing the weights of [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) to INT8 data type.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.
Only the weights of the linear operators within transformers blocks are quantized.
Symmetric per-channel quantization is applied, in which a per-output-channel linear scale maps between the INT8 and floating-point representations of the quantized weights.
The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library.
## Deployment
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "neuralmagic/Qwen2.5-0.5B-quantized.w8a16"
number_gpus = 1
max_model_len = 8192
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Give me a short introduction to large language model."
llm = LLM(model=model_id, tensor_parallel_size=number_gpus, max_model_len=max_model_len)
outputs = llm.generate(prompt, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
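For example, once a server for this checkpoint is running (e.g. vLLM's OpenAI-compatible server on its default port), it can be queried with the standard OpenAI client; the endpoint URL below is an assumption.
```python
from openai import OpenAI

# Assumes a vLLM OpenAI-compatible server is already running locally on port 8000
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="neuralmagic/Qwen2.5-0.5B-quantized.w8a16",
    prompt="Give me a short introduction to large language models.",
    max_tokens=128,
    temperature=0.7,
)
print(completion.choices[0].text)
```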
## Evaluation
The model was evaluated on the OpenLLMv1 benchmark, composed of MMLU, ARC-Challenge, GSM-8K, Hellaswag, Winogrande and TruthfulQA.
Evaluation was conducted using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and the [vLLM](https://docs.vllm.ai/en/stable/) engine.
### Accuracy
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Qwen2.5-0.5B</strong>
</td>
<td><strong>Qwen2.5-0.5B-quantized.w8a16<br>(this model)</strong>
</td>
<td><strong>Recovery</strong>
</td>
</tr>
<tr>
<td rowspan="8" ><strong>OpenLLM v1</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>47.57
</td>
<td>47.81
</td>
<td>100.5%
</td>
</tr>
<tr>
<td>ARC Challenge (25-shot)
</td>
<td>34.90
</td>
<td>34.90
</td>
<td>100.0%
</td>
</tr>
<tr>
<td>GSM-8k (5-shot, strict-match)
</td>
<td>34.19
</td>
<td>33.51
</td>
<td>98.0%
</td>
</tr>
<tr>
<td>Hellaswag (10-shot)
</td>
<td>51.83
</td>
<td>51.78
</td>
<td>99.9%
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>55.80
</td>
<td>55.49
</td>
<td>99.4%
</td>
</tr>
<tr>
<td>TruthfulQA (0-shot, mc2)
</td>
<td>39.90
</td>
<td>39.71
</td>
<td>99.5%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>44.0</strong>
</td>
<td><strong>43.9</strong>
</td>
<td><strong>99.6%</strong>
</td>
</tr>
</table>
### Reproduction
The results were obtained using the following command:
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Qwen2.5-0.5B-quantized.w8a16",dtype=auto,max_model_len=4096,add_bos_token=True,tensor_parallel_size=1 \
--tasks openllm \
--batch_size auto
```
|
zelk12/MT5-Gen2-MMG-gemma-2-9B | zelk12 | 2024-11-26T18:29:12Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT5-Gen2-GP-gemma-2-MT1RGDv0.1-9B",
"base_model:merge:zelk12/MT5-Gen2-GP-gemma-2-MT1RGDv0.1-9B",
"base_model:zelk12/MT5-Gen2-MM-gemma-2-MT1Av4aA-9B",
"base_model:merge:zelk12/MT5-Gen2-MM-gemma-2-MT1Av4aA-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-23T12:32:39Z | ---
base_model:
- zelk12/MT5-Gen2-GP-gemma-2-MT1RGDv0.1-9B
- zelk12/MT5-Gen2-MM-gemma-2-MT1Av4aA-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by [@mradermacher](https://huggingface.co/mradermacher)
GGUF Static: https://huggingface.co/mradermacher/MT5-Gen2-MMG-gemma-2-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
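As a rough sketch of the idea (mergekit's actual implementation handles per-tensor details, fallbacks, and dtype bookkeeping), SLERP interpolates two weight tensors along the arc between them rather than along a straight line:
```python
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shaped weight tensors."""
    a, b = w_a.flatten().float(), w_b.flatten().float()
    a_n, b_n = a / (a.norm() + eps), b / (b.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    if omega.abs() < eps:                       # nearly parallel: fall back to plain lerp
        out = (1 - t) * a + t * b
    else:
        out = (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)
    return out.reshape(w_a.shape).to(w_a.dtype)

# t = 0.25 as in the configuration below: the result stays closer to the base model
merged = slerp(torch.randn(8, 8), torch.randn(8, 8), t=0.25)
```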
### Models Merged
The following models were included in the merge:
* [zelk12/MT5-Gen2-GP-gemma-2-MT1RGDv0.1-9B](https://huggingface.co/zelk12/MT5-Gen2-GP-gemma-2-MT1RGDv0.1-9B)
* [zelk12/MT5-Gen2-MM-gemma-2-MT1Av4aA-9B](https://huggingface.co/zelk12/MT5-Gen2-MM-gemma-2-MT1Av4aA-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT5-Gen2-MM-gemma-2-MT1Av4aA-9B
- model: zelk12/MT5-Gen2-GP-gemma-2-MT1RGDv0.1-9B
merge_method: slerp
base_model: zelk12/MT5-Gen2-MM-gemma-2-MT1Av4aA-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
Triangle104/Qwen2.5-14B-Instruct-abliterated-Q6_K-GGUF | Triangle104 | 2024-11-26T18:29:04Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/Qwen2.5-14B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2.5-14B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-26T18:28:16Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: huihui-ai/Qwen2.5-14B-Instruct-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-14B-Instruct-abliterated-Q6_K-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2.5-14B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-14B-Instruct-abliterated-Q6_K-GGUF --hf-file qwen2.5-14b-instruct-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-14B-Instruct-abliterated-Q6_K-GGUF --hf-file qwen2.5-14b-instruct-abliterated-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-14B-Instruct-abliterated-Q6_K-GGUF --hf-file qwen2.5-14b-instruct-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-14B-Instruct-abliterated-Q6_K-GGUF --hf-file qwen2.5-14b-instruct-abliterated-q6_k.gguf -c 2048
```
|
drusama1979/urdu_text_to_speech_tts | drusama1979 | 2024-11-26T18:22:33Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-11-26T17:25:05Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: urdu_text_to_speech_tts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# urdu_text_to_speech_tts
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5696
## Model description
More information needed
## Intended uses & limitations
More information needed
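Pending fuller documentation, the sketch below shows one way to synthesize speech with the standard SpeechT5 pipeline. It assumes the repository includes the processor (otherwise load it from microsoft/speecht5_tts) and borrows an x-vector speaker embedding from the CMU Arctic set as a placeholder; the input sentence is illustrative.
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "drusama1979/urdu_text_to_speech_tts"
processor = SpeechT5Processor.from_pretrained(model_id)   # assumption: processor files are in the repo
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="yeh aik misali jumla hai", return_tensors="pt")  # illustrative romanized Urdu

# Placeholder speaker embedding; replace with one matching the training data
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```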
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8246 | 0.9836 | 30 | 0.8236 |
| 0.8342 | 2.0 | 61 | 0.6613 |
| 0.7407 | 2.9836 | 91 | 0.6161 |
| 0.6744 | 4.0 | 122 | 0.5770 |
| 0.5939 | 4.9180 | 150 | 0.5696 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mradermacher/Odin-v1.0-8b-FICTION-1024k-GGUF | mradermacher | 2024-11-26T18:22:18Z | 16 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-11-26T09:30:05Z | ---
base_model: MrRobotoAI/Odin-v1.0-8b-FICTION-1024k
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/MrRobotoAI/Odin-v1.0-8b-FICTION-1024k
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Odin-v1.0-8b-FICTION-1024k-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.0-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.0-8b-FICTION-1024k.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.0-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.0-8b-FICTION-1024k.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.0-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.0-8b-FICTION-1024k.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.0-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.0-8b-FICTION-1024k.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.0-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.0-8b-FICTION-1024k.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.0-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.0-8b-FICTION-1024k.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.0-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.0-8b-FICTION-1024k.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.0-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.0-8b-FICTION-1024k.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.0-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.0-8b-FICTION-1024k.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.0-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.0-8b-FICTION-1024k.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.0-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.0-8b-FICTION-1024k.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.0-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.0-8b-FICTION-1024k.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Odin-v1.0-8b-FICTION-1024k-GGUF/resolve/main/Odin-v1.0-8b-FICTION-1024k.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
autoprogrammer/Llama-3.2-1B-Instruct-ja | autoprogrammer | 2024-11-26T18:16:14Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T03:41:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zelk12/MT4-Gen2-MAMU-gemma-2-9B | zelk12 | 2024-11-26T18:12:46Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT4-Gen2-MA-gemma-2-N3N1532MT1-9B",
"base_model:merge:zelk12/MT4-Gen2-MA-gemma-2-N3N1532MT1-9B",
"base_model:zelk12/MT4-Gen2-MU-gemma-2-Sv1IBTMT1-9B",
"base_model:merge:zelk12/MT4-Gen2-MU-gemma-2-Sv1IBTMT1-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-22T17:45:01Z | ---
base_model:
- zelk12/MT4-Gen2-MA-gemma-2-N3N1532MT1-9B
- zelk12/MT4-Gen2-MU-gemma-2-Sv1IBTMT1-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by [@mradermacher](https://huggingface.co/mradermacher)
GGUF Static: https://huggingface.co/mradermacher/MT4-Gen2-MAMU-gemma-2-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT4-Gen2-MA-gemma-2-N3N1532MT1-9B](https://huggingface.co/zelk12/MT4-Gen2-MA-gemma-2-N3N1532MT1-9B)
* [zelk12/MT4-Gen2-MU-gemma-2-Sv1IBTMT1-9B](https://huggingface.co/zelk12/MT4-Gen2-MU-gemma-2-Sv1IBTMT1-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT4-Gen2-MA-gemma-2-N3N1532MT1-9B
- model: zelk12/MT4-Gen2-MU-gemma-2-Sv1IBTMT1-9B
merge_method: slerp
base_model: zelk12/MT4-Gen2-MA-gemma-2-N3N1532MT1-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
zelk12/MT4-Gen2-IMM-gemma-2-9B | zelk12 | 2024-11-26T18:11:59Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT4-Gen2-IF-gemma-2-MT5MT1-9B",
"base_model:merge:zelk12/MT4-Gen2-IF-gemma-2-MT5MT1-9B",
"base_model:zelk12/MT4-Gen2-MM-gemma-2-Rv0.4MT1-9B",
"base_model:merge:zelk12/MT4-Gen2-MM-gemma-2-Rv0.4MT1-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-22T17:02:54Z | ---
base_model:
- zelk12/MT4-Gen2-IF-gemma-2-MT5MT1-9B
- zelk12/MT4-Gen2-MM-gemma-2-Rv0.4MT1-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by [@mradermacher](https://huggingface.co/mradermacher)
GGUF Static: https://huggingface.co/mradermacher/MT4-Gen2-IMM-gemma-2-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT4-Gen2-IF-gemma-2-MT5MT1-9B](https://huggingface.co/zelk12/MT4-Gen2-IF-gemma-2-MT5MT1-9B)
* [zelk12/MT4-Gen2-MM-gemma-2-Rv0.4MT1-9B](https://huggingface.co/zelk12/MT4-Gen2-MM-gemma-2-Rv0.4MT1-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT4-Gen2-IF-gemma-2-MT5MT1-9B
- model: zelk12/MT4-Gen2-MM-gemma-2-Rv0.4MT1-9B
merge_method: slerp
base_model: zelk12/MT4-Gen2-IF-gemma-2-MT5MT1-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
zelk12/MT4-Gen2-MU-gemma-2-Sv1IBTMT1-9B | zelk12 | 2024-11-26T18:10:55Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:gmonsoon/SahabatAI-Lion-9B-TIES-v1",
"base_model:merge:gmonsoon/SahabatAI-Lion-9B-TIES-v1",
"base_model:zelk12/MT1-gemma-2-9B",
"base_model:merge:zelk12/MT1-gemma-2-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-22T14:54:19Z | ---
base_model:
- gmonsoon/SahabatAI-Lion-9B-TIES-v1
- zelk12/MT1-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by [@mradermacher](https://huggingface.co/mradermacher)
GGUF Static: https://huggingface.co/mradermacher/MT4-Gen2-MU-gemma-2-Sv1IBTMT1-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [gmonsoon/SahabatAI-Lion-9B-TIES-v1](https://huggingface.co/gmonsoon/SahabatAI-Lion-9B-TIES-v1)
* [zelk12/MT1-gemma-2-9B](https://huggingface.co/zelk12/MT1-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: gmonsoon/SahabatAI-Lion-9B-TIES-v1
- model: zelk12/MT1-gemma-2-9B
merge_method: slerp
base_model: gmonsoon/SahabatAI-Lion-9B-TIES-v1
dtype: bfloat16
parameters:
t: 0.25
```
|
zelk12/MT4-Gen2-MA-gemma-2-N3N1532MT1-9B | zelk12 | 2024-11-26T18:08:41Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:nhyha/N3N_gemma-2-9b-it_20241029_1532",
"base_model:merge:nhyha/N3N_gemma-2-9b-it_20241029_1532",
"base_model:zelk12/MT1-gemma-2-9B",
"base_model:merge:zelk12/MT1-gemma-2-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-22T14:20:52Z | ---
base_model:
- zelk12/MT1-gemma-2-9B
- nhyha/N3N_gemma-2-9b-it_20241029_1532
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by [@mradermacher](https://huggingface.co/mradermacher)
GGUF Static: https://huggingface.co/mradermacher/MT4-Gen2-MA-gemma-2-N3N1532MT1-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT1-gemma-2-9B](https://huggingface.co/zelk12/MT1-gemma-2-9B)
* [nhyha/N3N_gemma-2-9b-it_20241029_1532](https://huggingface.co/nhyha/N3N_gemma-2-9b-it_20241029_1532)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nhyha/N3N_gemma-2-9b-it_20241029_1532
- model: zelk12/MT1-gemma-2-9B
merge_method: slerp
base_model: nhyha/N3N_gemma-2-9b-it_20241029_1532
dtype: bfloat16
parameters:
t: 0.25
```
|
mradermacher/BigWeave-v25-95b-i1-GGUF | mradermacher | 2024-11-26T18:00:14Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"frankenmerge",
"95b",
"en",
"base_model:llmixer/BigWeave-v25-95b",
"base_model:quantized:llmixer/BigWeave-v25-95b",
"license:unknown",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-26T05:04:26Z | ---
base_model: llmixer/BigWeave-v25-95b
language:
- en
library_name: transformers
license: unknown
quantized_by: mradermacher
tags:
- merge
- frankenmerge
- 95b
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/llmixer/BigWeave-v25-95b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/BigWeave-v25-95b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
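If the parts are plain byte-level splits, as the linked README describes, joining them is a straight concatenation (equivalent to `cat part1 part2 > out`); the file names in this sketch are examples.
```python
import shutil

# Example file names; substitute the parts you actually downloaded
parts = [
    "BigWeave-v25-95b.i1-Q4_K_S.gguf.part1of2",
    "BigWeave-v25-95b.i1-Q4_K_S.gguf.part2of2",
]
with open("BigWeave-v25-95b.i1-Q4_K_S.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, merged)
```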
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-IQ1_S.gguf) | i1-IQ1_S | 20.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-IQ1_M.gguf) | i1-IQ1_M | 21.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.1 | |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-IQ2_S.gguf) | i1-IQ2_S | 29.3 | |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-IQ2_M.gguf) | i1-IQ2_M | 31.9 | |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-Q2_K.gguf) | i1-Q2_K | 34.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 36.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 38.8 | |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 40.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-IQ3_S.gguf) | i1-IQ3_S | 41.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-IQ3_M.gguf) | i1-IQ3_M | 42.4 | |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 45.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 49.7 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 50.6 | |
| [PART 1](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 53.6 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 53.8 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 56.8 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 65.2 | |
| [PART 1](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 67.0 | |
| [PART 1](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BigWeave-v25-95b-i1-GGUF/resolve/main/BigWeave-v25-95b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 77.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Jeeeeeeeeeeeeeeez/my-tinyreco-model-new-data-bright9 | Jeeeeeeeeeeeeeeez | 2024-11-26T17:55:58Z | 165 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T17:54:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hotal/honeypot-llama3-8B | hotal | 2024-11-26T17:52:51Z | 75 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"dataset:hotal/honeypot_logs",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-30T20:33:35Z | ---
library_name: transformers
datasets:
- hotal/honeypot_logs
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
---
# LLM Honeypot
Code for our paper "LLM Honeypot: Leveraging Large Language Models as Advanced Interactive Honeypot Systems", published at the 2024 IEEE Conference on Communications and Network Security (CNS).
You can download the paper via: [[IEEE]](https://ieeexplore.ieee.org/iel8/10735442/10735467/10735607.pdf) - [[DOI]](https://doi.org/10.1109/CNS62487.2024.10735607)
## Abstract
The rapid evolution of cyber threats necessitates innovative solutions for detecting and analyzing malicious activity. Honeypots, which are decoy systems designed to lure and interact with attackers, have emerged as a critical component in cybersecurity. In this paper, we present a novel approach to creating realistic and interactive honeypot systems using Large Language Models (LLMs). By fine-tuning a pre-trained open-source language model on a diverse dataset of attacker-generated commands and responses, we developed a honeypot capable of sophisticated engagement with attackers. Our methodology involved several key steps: data collection and processing, prompt engineering, model selection, and supervised fine-tuning to optimize the model’s performance. Evaluation through similarity metrics and live deployment demonstrated that our approach effectively generates accurate and informative responses. The results highlight the potential of LLMs to revolutionize honeypot technology, providing cybersecurity professionals with a powerful tool to detect and analyze malicious activity, thereby enhancing overall security infrastructure.
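## Usage
A minimal interaction sketch, assuming the chat template inherited from the Llama-3-Instruct base model applies; the system prompt and attacker command below are hypothetical, not the exact prompts used in the paper.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "hotal/honeypot-llama3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a Linux SSH server. Reply only with realistic terminal output."},
    {"role": "user", "content": "uname -a"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```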
## Citation
If this work is helpful, please cite as:
```bibtex
@INPROCEEDINGS{10735607,
author={Otal, Hakan T. and Canbaz, M. Abdullah},
booktitle={2024 IEEE Conference on Communications and Network Security (CNS)},
title={LLM Honeypot: Leveraging Large Language Models as Advanced Interactive Honeypot Systems},
year={2024},
pages={1-6},
doi={10.1109/CNS62487.2024.10735607}
}
```
## Contact
hotal [AT] albany [DOT] edu |
MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k60_task2_organization_fold0 | MayBashendy | 2024-11-26T17:50:59Z | 165 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T17:22:10Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits_FineTuningAraBERT_AugV5_k60_task2_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits_FineTuningAraBERT_AugV5_k60_task2_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6441
- Qwk: 0.4002
- Mse: 0.6441
- Rmse: 0.8025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0106 | 2 | 3.6199 | -0.0024 | 3.6199 | 1.9026 |
| No log | 0.0212 | 4 | 2.4302 | 0.0144 | 2.4302 | 1.5589 |
| No log | 0.0317 | 6 | 1.7528 | -0.0442 | 1.7528 | 1.3239 |
| No log | 0.0423 | 8 | 2.7208 | -0.0514 | 2.7208 | 1.6495 |
| No log | 0.0529 | 10 | 2.7767 | -0.0260 | 2.7767 | 1.6663 |
| No log | 0.0635 | 12 | 1.7445 | 0.0473 | 1.7445 | 1.3208 |
| No log | 0.0741 | 14 | 1.0873 | 0.1567 | 1.0873 | 1.0427 |
| No log | 0.0847 | 16 | 0.9579 | 0.0845 | 0.9579 | 0.9787 |
| No log | 0.0952 | 18 | 1.2416 | 0.1667 | 1.2416 | 1.1143 |
| No log | 0.1058 | 20 | 1.6184 | 0.0761 | 1.6184 | 1.2722 |
| No log | 0.1164 | 22 | 2.1942 | 0.0094 | 2.1942 | 1.4813 |
| No log | 0.1270 | 24 | 2.2283 | 0.0271 | 2.2283 | 1.4928 |
| No log | 0.1376 | 26 | 1.7799 | 0.0112 | 1.7799 | 1.3341 |
| No log | 0.1481 | 28 | 1.5010 | 0.1278 | 1.5010 | 1.2252 |
| No log | 0.1587 | 30 | 1.2735 | 0.1121 | 1.2735 | 1.1285 |
| No log | 0.1693 | 32 | 1.0488 | 0.1421 | 1.0488 | 1.0241 |
| No log | 0.1799 | 34 | 0.8727 | 0.0240 | 0.8727 | 0.9342 |
| No log | 0.1905 | 36 | 0.8513 | -0.1013 | 0.8513 | 0.9226 |
| No log | 0.2011 | 38 | 0.9093 | -0.0771 | 0.9093 | 0.9536 |
| No log | 0.2116 | 40 | 1.0169 | 0.0992 | 1.0169 | 1.0084 |
| No log | 0.2222 | 42 | 1.1067 | 0.1567 | 1.1067 | 1.0520 |
| No log | 0.2328 | 44 | 1.1643 | 0.1567 | 1.1643 | 1.0790 |
| No log | 0.2434 | 46 | 1.2640 | 0.1567 | 1.2640 | 1.1243 |
| No log | 0.2540 | 48 | 1.3999 | 0.1074 | 1.3999 | 1.1832 |
| No log | 0.2646 | 50 | 1.3153 | 0.1567 | 1.3153 | 1.1468 |
| No log | 0.2751 | 52 | 1.2526 | 0.1567 | 1.2526 | 1.1192 |
| No log | 0.2857 | 54 | 1.0790 | 0.1567 | 1.0790 | 1.0388 |
| No log | 0.2963 | 56 | 0.8909 | 0.0597 | 0.8909 | 0.9439 |
| No log | 0.3069 | 58 | 0.8634 | 0.1324 | 0.8634 | 0.9292 |
| No log | 0.3175 | 60 | 0.8795 | 0.1324 | 0.8795 | 0.9378 |
| No log | 0.3280 | 62 | 0.9660 | 0.1172 | 0.9660 | 0.9828 |
| No log | 0.3386 | 64 | 1.0390 | 0.1172 | 1.0390 | 1.0193 |
| No log | 0.3492 | 66 | 1.1169 | 0.1172 | 1.1169 | 1.0568 |
| No log | 0.3598 | 68 | 1.1634 | 0.1172 | 1.1634 | 1.0786 |
| No log | 0.3704 | 70 | 1.1416 | 0.1005 | 1.1416 | 1.0685 |
| No log | 0.3810 | 72 | 1.0946 | 0.1005 | 1.0946 | 1.0462 |
| No log | 0.3915 | 74 | 1.0984 | 0.0442 | 1.0984 | 1.0481 |
| No log | 0.4021 | 76 | 1.1918 | 0.0304 | 1.1918 | 1.0917 |
| No log | 0.4127 | 78 | 1.1823 | 0.1005 | 1.1823 | 1.0874 |
| No log | 0.4233 | 80 | 1.1459 | 0.1005 | 1.1459 | 1.0705 |
| No log | 0.4339 | 82 | 1.0521 | 0.1172 | 1.0521 | 1.0257 |
| No log | 0.4444 | 84 | 0.9626 | 0.1449 | 0.9626 | 0.9811 |
| No log | 0.4550 | 86 | 0.8420 | 0.2017 | 0.8420 | 0.9176 |
| No log | 0.4656 | 88 | 0.8366 | 0.2017 | 0.8366 | 0.9147 |
| No log | 0.4762 | 90 | 0.8960 | 0.1972 | 0.8960 | 0.9466 |
| No log | 0.4868 | 92 | 0.8762 | 0.2389 | 0.8762 | 0.9361 |
| No log | 0.4974 | 94 | 0.8500 | 0.2807 | 0.8500 | 0.9220 |
| No log | 0.5079 | 96 | 0.8162 | 0.2671 | 0.8162 | 0.9035 |
| No log | 0.5185 | 98 | 0.8485 | 0.2671 | 0.8485 | 0.9211 |
| No log | 0.5291 | 100 | 0.8821 | 0.2807 | 0.8821 | 0.9392 |
| No log | 0.5397 | 102 | 0.8708 | 0.3361 | 0.8708 | 0.9332 |
| No log | 0.5503 | 104 | 0.8459 | 0.3498 | 0.8459 | 0.9197 |
| No log | 0.5608 | 106 | 0.8710 | 0.3498 | 0.8710 | 0.9332 |
| No log | 0.5714 | 108 | 0.9859 | 0.3278 | 0.9859 | 0.9929 |
| No log | 0.5820 | 110 | 0.8994 | 0.3498 | 0.8994 | 0.9484 |
| No log | 0.5926 | 112 | 0.7325 | 0.2764 | 0.7325 | 0.8559 |
| No log | 0.6032 | 114 | 0.7457 | 0.1968 | 0.7457 | 0.8635 |
| No log | 0.6138 | 116 | 0.7559 | 0.1176 | 0.7559 | 0.8694 |
| No log | 0.6243 | 118 | 0.7528 | 0.1176 | 0.7528 | 0.8677 |
| No log | 0.6349 | 120 | 0.7722 | 0.2391 | 0.7722 | 0.8788 |
| No log | 0.6455 | 122 | 0.9260 | 0.2378 | 0.9260 | 0.9623 |
| No log | 0.6561 | 124 | 1.2033 | 0.2033 | 1.2033 | 1.0969 |
| No log | 0.6667 | 126 | 1.3707 | 0.1746 | 1.3707 | 1.1708 |
| No log | 0.6772 | 128 | 1.3014 | 0.1396 | 1.3014 | 1.1408 |
| No log | 0.6878 | 130 | 1.1427 | 0.1798 | 1.1427 | 1.0690 |
| No log | 0.6984 | 132 | 0.9649 | 0.2118 | 0.9649 | 0.9823 |
| No log | 0.7090 | 134 | 0.9312 | 0.2598 | 0.9312 | 0.9650 |
| No log | 0.7196 | 136 | 0.9188 | 0.2924 | 0.9188 | 0.9585 |
| No log | 0.7302 | 138 | 0.9093 | 0.3053 | 0.9093 | 0.9536 |
| No log | 0.7407 | 140 | 0.7076 | 0.4046 | 0.7076 | 0.8412 |
| No log | 0.7513 | 142 | 0.6044 | 0.4012 | 0.6044 | 0.7774 |
| No log | 0.7619 | 144 | 0.5893 | 0.4296 | 0.5893 | 0.7676 |
| No log | 0.7725 | 146 | 0.5995 | 0.3708 | 0.5995 | 0.7743 |
| No log | 0.7831 | 148 | 0.6755 | 0.4020 | 0.6755 | 0.8219 |
| No log | 0.7937 | 150 | 0.6914 | 0.3996 | 0.6914 | 0.8315 |
| No log | 0.8042 | 152 | 0.6242 | 0.3817 | 0.6242 | 0.7900 |
| No log | 0.8148 | 154 | 0.6047 | 0.4272 | 0.6047 | 0.7776 |
| No log | 0.8254 | 156 | 0.6479 | 0.4973 | 0.6479 | 0.8049 |
| No log | 0.8360 | 158 | 0.6455 | 0.5421 | 0.6455 | 0.8035 |
| No log | 0.8466 | 160 | 0.6486 | 0.5175 | 0.6486 | 0.8053 |
| No log | 0.8571 | 162 | 0.6445 | 0.5046 | 0.6445 | 0.8028 |
| No log | 0.8677 | 164 | 0.6554 | 0.3641 | 0.6554 | 0.8096 |
| No log | 0.8783 | 166 | 0.6129 | 0.4099 | 0.6129 | 0.7829 |
| No log | 0.8889 | 168 | 0.6137 | 0.3671 | 0.6137 | 0.7834 |
| No log | 0.8995 | 170 | 0.6716 | 0.4097 | 0.6716 | 0.8195 |
| No log | 0.9101 | 172 | 0.6492 | 0.3490 | 0.6492 | 0.8057 |
| No log | 0.9206 | 174 | 0.6505 | 0.3034 | 0.6505 | 0.8065 |
| No log | 0.9312 | 176 | 0.6582 | 0.3742 | 0.6582 | 0.8113 |
| No log | 0.9418 | 178 | 0.6964 | 0.3737 | 0.6964 | 0.8345 |
| No log | 0.9524 | 180 | 0.7210 | 0.4327 | 0.7210 | 0.8491 |
| No log | 0.9630 | 182 | 0.7840 | 0.3025 | 0.7840 | 0.8854 |
| No log | 0.9735 | 184 | 0.7747 | 0.2821 | 0.7747 | 0.8802 |
| No log | 0.9841 | 186 | 0.6987 | 0.3384 | 0.6987 | 0.8359 |
| No log | 0.9947 | 188 | 0.6434 | 0.4320 | 0.6434 | 0.8021 |
| No log | 1.0053 | 190 | 0.6165 | 0.3978 | 0.6165 | 0.7851 |
| No log | 1.0159 | 192 | 0.6192 | 0.4022 | 0.6192 | 0.7869 |
| No log | 1.0265 | 194 | 0.6027 | 0.3896 | 0.6027 | 0.7763 |
| No log | 1.0370 | 196 | 0.6157 | 0.4185 | 0.6157 | 0.7846 |
| No log | 1.0476 | 198 | 0.6537 | 0.3877 | 0.6537 | 0.8085 |
| No log | 1.0582 | 200 | 0.7090 | 0.4090 | 0.7090 | 0.8420 |
| No log | 1.0688 | 202 | 0.6925 | 0.3485 | 0.6925 | 0.8322 |
| No log | 1.0794 | 204 | 0.6894 | 0.3215 | 0.6894 | 0.8303 |
| No log | 1.0899 | 206 | 0.6801 | 0.3494 | 0.6801 | 0.8247 |
| No log | 1.1005 | 208 | 0.6973 | 0.2273 | 0.6973 | 0.8350 |
| No log | 1.1111 | 210 | 0.6509 | 0.3290 | 0.6509 | 0.8068 |
| No log | 1.1217 | 212 | 0.6368 | 0.3013 | 0.6368 | 0.7980 |
| No log | 1.1323 | 214 | 0.6720 | 0.3218 | 0.6720 | 0.8198 |
| No log | 1.1429 | 216 | 0.7583 | 0.2597 | 0.7583 | 0.8708 |
| No log | 1.1534 | 218 | 0.7043 | 0.3422 | 0.7043 | 0.8392 |
| No log | 1.1640 | 220 | 0.6743 | 0.3813 | 0.6743 | 0.8212 |
| No log | 1.1746 | 222 | 0.6803 | 0.4107 | 0.6803 | 0.8248 |
| No log | 1.1852 | 224 | 0.6611 | 0.3668 | 0.6611 | 0.8131 |
| No log | 1.1958 | 226 | 0.6512 | 0.3932 | 0.6512 | 0.8070 |
| No log | 1.2063 | 228 | 0.6153 | 0.3615 | 0.6153 | 0.7844 |
| No log | 1.2169 | 230 | 0.6026 | 0.3454 | 0.6026 | 0.7763 |
| No log | 1.2275 | 232 | 0.6025 | 0.3425 | 0.6025 | 0.7762 |
| No log | 1.2381 | 234 | 0.6922 | 0.2386 | 0.6922 | 0.8320 |
| No log | 1.2487 | 236 | 0.7418 | 0.2302 | 0.7418 | 0.8613 |
| No log | 1.2593 | 238 | 0.7013 | 0.2482 | 0.7013 | 0.8374 |
| No log | 1.2698 | 240 | 0.6452 | 0.3434 | 0.6452 | 0.8032 |
| No log | 1.2804 | 242 | 0.6347 | 0.3301 | 0.6347 | 0.7967 |
| No log | 1.2910 | 244 | 0.6556 | 0.3656 | 0.6556 | 0.8097 |
| No log | 1.3016 | 246 | 0.6448 | 0.4103 | 0.6448 | 0.8030 |
| No log | 1.3122 | 248 | 0.6314 | 0.3960 | 0.6314 | 0.7946 |
| No log | 1.3228 | 250 | 0.6321 | 0.3620 | 0.6321 | 0.7951 |
| No log | 1.3333 | 252 | 0.6121 | 0.3285 | 0.6121 | 0.7824 |
| No log | 1.3439 | 254 | 0.6104 | 0.2689 | 0.6104 | 0.7813 |
| No log | 1.3545 | 256 | 0.6441 | 0.2422 | 0.6441 | 0.8025 |
| No log | 1.3651 | 258 | 0.6547 | 0.2273 | 0.6547 | 0.8091 |
| No log | 1.3757 | 260 | 0.6680 | 0.2113 | 0.6680 | 0.8173 |
| No log | 1.3862 | 262 | 0.6433 | 0.3425 | 0.6433 | 0.8021 |
| No log | 1.3968 | 264 | 0.6705 | 0.3570 | 0.6705 | 0.8189 |
| No log | 1.4074 | 266 | 0.6818 | 0.3738 | 0.6818 | 0.8257 |
| No log | 1.4180 | 268 | 0.6703 | 0.3195 | 0.6703 | 0.8187 |
| No log | 1.4286 | 270 | 0.6987 | 0.2638 | 0.6987 | 0.8359 |
| No log | 1.4392 | 272 | 0.7649 | 0.2761 | 0.7649 | 0.8746 |
| No log | 1.4497 | 274 | 0.7206 | 0.2225 | 0.7206 | 0.8489 |
| No log | 1.4603 | 276 | 0.6411 | 0.2386 | 0.6411 | 0.8007 |
| No log | 1.4709 | 278 | 0.5730 | 0.2653 | 0.5730 | 0.7570 |
| No log | 1.4815 | 280 | 0.5509 | 0.4152 | 0.5509 | 0.7423 |
| No log | 1.4921 | 282 | 0.5669 | 0.4828 | 0.5669 | 0.7529 |
| No log | 1.5026 | 284 | 0.5626 | 0.4828 | 0.5626 | 0.7501 |
| No log | 1.5132 | 286 | 0.5572 | 0.4229 | 0.5572 | 0.7464 |
| No log | 1.5238 | 288 | 0.5471 | 0.4638 | 0.5471 | 0.7397 |
| No log | 1.5344 | 290 | 0.5486 | 0.3980 | 0.5486 | 0.7407 |
| No log | 1.5450 | 292 | 0.5723 | 0.3714 | 0.5723 | 0.7565 |
| No log | 1.5556 | 294 | 0.5879 | 0.3728 | 0.5879 | 0.7668 |
| No log | 1.5661 | 296 | 0.5705 | 0.4037 | 0.5705 | 0.7553 |
| No log | 1.5767 | 298 | 0.5539 | 0.5081 | 0.5539 | 0.7443 |
| No log | 1.5873 | 300 | 0.5640 | 0.4677 | 0.5640 | 0.7510 |
| No log | 1.5979 | 302 | 0.5613 | 0.4970 | 0.5613 | 0.7492 |
| No log | 1.6085 | 304 | 0.5532 | 0.4345 | 0.5532 | 0.7438 |
| No log | 1.6190 | 306 | 0.5859 | 0.4470 | 0.5859 | 0.7654 |
| No log | 1.6296 | 308 | 0.5764 | 0.4619 | 0.5764 | 0.7592 |
| No log | 1.6402 | 310 | 0.5325 | 0.3719 | 0.5325 | 0.7297 |
| No log | 1.6508 | 312 | 0.5315 | 0.4667 | 0.5315 | 0.7290 |
| No log | 1.6614 | 314 | 0.5334 | 0.4196 | 0.5334 | 0.7303 |
| No log | 1.6720 | 316 | 0.6246 | 0.3417 | 0.6246 | 0.7903 |
| No log | 1.6825 | 318 | 0.6892 | 0.3486 | 0.6892 | 0.8302 |
| No log | 1.6931 | 320 | 0.6383 | 0.3218 | 0.6383 | 0.7989 |
| No log | 1.7037 | 322 | 0.5904 | 0.3809 | 0.5904 | 0.7684 |
| No log | 1.7143 | 324 | 0.5369 | 0.4330 | 0.5369 | 0.7327 |
| No log | 1.7249 | 326 | 0.5246 | 0.4347 | 0.5246 | 0.7243 |
| No log | 1.7354 | 328 | 0.5318 | 0.3838 | 0.5318 | 0.7292 |
| No log | 1.7460 | 330 | 0.5395 | 0.3381 | 0.5395 | 0.7345 |
| No log | 1.7566 | 332 | 0.5700 | 0.4311 | 0.5700 | 0.7550 |
| No log | 1.7672 | 334 | 0.5786 | 0.3840 | 0.5786 | 0.7607 |
| No log | 1.7778 | 336 | 0.5709 | 0.3127 | 0.5709 | 0.7556 |
| No log | 1.7884 | 338 | 0.5758 | 0.3127 | 0.5758 | 0.7588 |
| No log | 1.7989 | 340 | 0.5578 | 0.3457 | 0.5578 | 0.7469 |
| No log | 1.8095 | 342 | 0.5541 | 0.3478 | 0.5541 | 0.7444 |
| No log | 1.8201 | 344 | 0.5492 | 0.4789 | 0.5492 | 0.7411 |
| No log | 1.8307 | 346 | 0.5474 | 0.4330 | 0.5474 | 0.7399 |
| No log | 1.8413 | 348 | 0.5599 | 0.4304 | 0.5599 | 0.7483 |
| No log | 1.8519 | 350 | 0.5711 | 0.4150 | 0.5711 | 0.7557 |
| No log | 1.8624 | 352 | 0.5536 | 0.4022 | 0.5536 | 0.7441 |
| No log | 1.8730 | 354 | 0.5459 | 0.4330 | 0.5459 | 0.7389 |
| No log | 1.8836 | 356 | 0.5427 | 0.4330 | 0.5427 | 0.7367 |
| No log | 1.8942 | 358 | 0.5537 | 0.4167 | 0.5537 | 0.7441 |
| No log | 1.9048 | 360 | 0.5903 | 0.4612 | 0.5903 | 0.7683 |
| No log | 1.9153 | 362 | 0.6603 | 0.3364 | 0.6603 | 0.8126 |
| No log | 1.9259 | 364 | 0.7160 | 0.3662 | 0.7160 | 0.8461 |
| No log | 1.9365 | 366 | 0.6650 | 0.4483 | 0.6650 | 0.8154 |
| No log | 1.9471 | 368 | 0.5919 | 0.4359 | 0.5919 | 0.7694 |
| No log | 1.9577 | 370 | 0.5650 | 0.4230 | 0.5650 | 0.7516 |
| No log | 1.9683 | 372 | 0.5660 | 0.4238 | 0.5660 | 0.7523 |
| No log | 1.9788 | 374 | 0.5740 | 0.4520 | 0.5740 | 0.7576 |
| No log | 1.9894 | 376 | 0.6053 | 0.3897 | 0.6053 | 0.7780 |
| No log | 2.0 | 378 | 0.6492 | 0.2939 | 0.6492 | 0.8057 |
| No log | 2.0106 | 380 | 0.6553 | 0.3751 | 0.6553 | 0.8095 |
| No log | 2.0212 | 382 | 0.6361 | 0.3249 | 0.6361 | 0.7975 |
| No log | 2.0317 | 384 | 0.6023 | 0.3828 | 0.6023 | 0.7761 |
| No log | 2.0423 | 386 | 0.5809 | 0.3580 | 0.5809 | 0.7622 |
| No log | 2.0529 | 388 | 0.5862 | 0.3453 | 0.5862 | 0.7656 |
| No log | 2.0635 | 390 | 0.5856 | 0.3443 | 0.5856 | 0.7653 |
| No log | 2.0741 | 392 | 0.5875 | 0.3443 | 0.5875 | 0.7665 |
| No log | 2.0847 | 394 | 0.5844 | 0.3499 | 0.5844 | 0.7645 |
| No log | 2.0952 | 396 | 0.5926 | 0.3838 | 0.5926 | 0.7698 |
| No log | 2.1058 | 398 | 0.6013 | 0.3819 | 0.6013 | 0.7755 |
| No log | 2.1164 | 400 | 0.5843 | 0.3499 | 0.5843 | 0.7644 |
| No log | 2.1270 | 402 | 0.5720 | 0.2736 | 0.5720 | 0.7563 |
| No log | 2.1376 | 404 | 0.5819 | 0.3295 | 0.5819 | 0.7628 |
| No log | 2.1481 | 406 | 0.5787 | 0.2822 | 0.5787 | 0.7607 |
| No log | 2.1587 | 408 | 0.5849 | 0.3752 | 0.5849 | 0.7648 |
| No log | 2.1693 | 410 | 0.6105 | 0.4137 | 0.6105 | 0.7813 |
| No log | 2.1799 | 412 | 0.6098 | 0.3969 | 0.6098 | 0.7809 |
| No log | 2.1905 | 414 | 0.6192 | 0.3809 | 0.6192 | 0.7869 |
| No log | 2.2011 | 416 | 0.6091 | 0.3292 | 0.6091 | 0.7804 |
| No log | 2.2116 | 418 | 0.6121 | 0.3292 | 0.6121 | 0.7824 |
| No log | 2.2222 | 420 | 0.5922 | 0.3292 | 0.5922 | 0.7696 |
| No log | 2.2328 | 422 | 0.5803 | 0.3786 | 0.5803 | 0.7618 |
| No log | 2.2434 | 424 | 0.5660 | 0.3204 | 0.5660 | 0.7523 |
| No log | 2.2540 | 426 | 0.5602 | 0.3530 | 0.5602 | 0.7485 |
| No log | 2.2646 | 428 | 0.5547 | 0.3989 | 0.5547 | 0.7448 |
| No log | 2.2751 | 430 | 0.5577 | 0.3796 | 0.5577 | 0.7468 |
| No log | 2.2857 | 432 | 0.5897 | 0.3457 | 0.5897 | 0.7679 |
| No log | 2.2963 | 434 | 0.6184 | 0.3649 | 0.6184 | 0.7864 |
| No log | 2.3069 | 436 | 0.6144 | 0.3649 | 0.6144 | 0.7838 |
| No log | 2.3175 | 438 | 0.5994 | 0.3649 | 0.5994 | 0.7742 |
| No log | 2.3280 | 440 | 0.5904 | 0.3786 | 0.5904 | 0.7684 |
| No log | 2.3386 | 442 | 0.5706 | 0.3816 | 0.5706 | 0.7554 |
| No log | 2.3492 | 444 | 0.5713 | 0.3960 | 0.5713 | 0.7558 |
| No log | 2.3598 | 446 | 0.5743 | 0.3642 | 0.5743 | 0.7578 |
| No log | 2.3704 | 448 | 0.6048 | 0.3809 | 0.6048 | 0.7777 |
| No log | 2.3810 | 450 | 0.6537 | 0.3530 | 0.6537 | 0.8085 |
| No log | 2.3915 | 452 | 0.6458 | 0.3840 | 0.6458 | 0.8036 |
| No log | 2.4021 | 454 | 0.6149 | 0.3350 | 0.6149 | 0.7842 |
| No log | 2.4127 | 456 | 0.5741 | 0.3642 | 0.5741 | 0.7577 |
| No log | 2.4233 | 458 | 0.5707 | 0.2724 | 0.5707 | 0.7555 |
| No log | 2.4339 | 460 | 0.5757 | 0.3275 | 0.5757 | 0.7588 |
| No log | 2.4444 | 462 | 0.5737 | 0.2560 | 0.5737 | 0.7574 |
| No log | 2.4550 | 464 | 0.5835 | 0.3314 | 0.5835 | 0.7638 |
| No log | 2.4656 | 466 | 0.6405 | 0.3292 | 0.6405 | 0.8003 |
| No log | 2.4762 | 468 | 0.6800 | 0.4153 | 0.6800 | 0.8246 |
| No log | 2.4868 | 470 | 0.6621 | 0.3869 | 0.6621 | 0.8137 |
| No log | 2.4974 | 472 | 0.6129 | 0.3520 | 0.6129 | 0.7829 |
| No log | 2.5079 | 474 | 0.5930 | 0.3520 | 0.5930 | 0.7701 |
| No log | 2.5185 | 476 | 0.5786 | 0.3489 | 0.5786 | 0.7607 |
| No log | 2.5291 | 478 | 0.5783 | 0.3489 | 0.5783 | 0.7605 |
| No log | 2.5397 | 480 | 0.6062 | 0.3695 | 0.6062 | 0.7786 |
| No log | 2.5503 | 482 | 0.6618 | 0.4153 | 0.6618 | 0.8135 |
| No log | 2.5608 | 484 | 0.7008 | 0.3558 | 0.7008 | 0.8371 |
| No log | 2.5714 | 486 | 0.6842 | 0.3558 | 0.6842 | 0.8272 |
| No log | 2.5820 | 488 | 0.6068 | 0.4161 | 0.6068 | 0.7790 |
| No log | 2.5926 | 490 | 0.5511 | 0.3848 | 0.5511 | 0.7424 |
| No log | 2.6032 | 492 | 0.5337 | 0.4025 | 0.5337 | 0.7306 |
| No log | 2.6138 | 494 | 0.5348 | 0.4067 | 0.5348 | 0.7313 |
| No log | 2.6243 | 496 | 0.5336 | 0.4211 | 0.5336 | 0.7305 |
| No log | 2.6349 | 498 | 0.5404 | 0.4185 | 0.5404 | 0.7351 |
| 0.3537 | 2.6455 | 500 | 0.5741 | 0.5054 | 0.5741 | 0.7577 |
| 0.3537 | 2.6561 | 502 | 0.6190 | 0.4153 | 0.6190 | 0.7867 |
| 0.3537 | 2.6667 | 504 | 0.6227 | 0.4457 | 0.6227 | 0.7891 |
| 0.3537 | 2.6772 | 506 | 0.5908 | 0.4470 | 0.5908 | 0.7686 |
| 0.3537 | 2.6878 | 508 | 0.5443 | 0.4359 | 0.5443 | 0.7377 |
| 0.3537 | 2.6984 | 510 | 0.5318 | 0.4040 | 0.5318 | 0.7293 |
| 0.3537 | 2.7090 | 512 | 0.5385 | 0.4040 | 0.5385 | 0.7338 |
| 0.3537 | 2.7196 | 514 | 0.5666 | 0.3897 | 0.5666 | 0.7527 |
| 0.3537 | 2.7302 | 516 | 0.6238 | 0.4153 | 0.6238 | 0.7898 |
| 0.3537 | 2.7407 | 518 | 0.6571 | 0.4002 | 0.6571 | 0.8106 |
| 0.3537 | 2.7513 | 520 | 0.6501 | 0.4153 | 0.6501 | 0.8063 |
| 0.3537 | 2.7619 | 522 | 0.6262 | 0.3831 | 0.6262 | 0.7913 |
| 0.3537 | 2.7725 | 524 | 0.5997 | 0.3489 | 0.5997 | 0.7744 |
| 0.3537 | 2.7831 | 526 | 0.6046 | 0.3329 | 0.6046 | 0.7776 |
| 0.3537 | 2.7937 | 528 | 0.6138 | 0.4153 | 0.6138 | 0.7835 |
| 0.3537 | 2.8042 | 530 | 0.6002 | 0.4011 | 0.6002 | 0.7747 |
| 0.3537 | 2.8148 | 532 | 0.5737 | 0.3446 | 0.5737 | 0.7574 |
| 0.3537 | 2.8254 | 534 | 0.5689 | 0.3888 | 0.5689 | 0.7543 |
| 0.3537 | 2.8360 | 536 | 0.5913 | 0.4170 | 0.5913 | 0.7689 |
| 0.3537 | 2.8466 | 538 | 0.5837 | 0.4170 | 0.5837 | 0.7640 |
| 0.3537 | 2.8571 | 540 | 0.5519 | 0.3222 | 0.5519 | 0.7429 |
| 0.3537 | 2.8677 | 542 | 0.5314 | 0.3857 | 0.5314 | 0.7290 |
| 0.3537 | 2.8783 | 544 | 0.5290 | 0.3857 | 0.5290 | 0.7273 |
| 0.3537 | 2.8889 | 546 | 0.5381 | 0.3877 | 0.5381 | 0.7336 |
| 0.3537 | 2.8995 | 548 | 0.5567 | 0.3819 | 0.5567 | 0.7461 |
| 0.3537 | 2.9101 | 550 | 0.5796 | 0.3831 | 0.5796 | 0.7613 |
| 0.3537 | 2.9206 | 552 | 0.5693 | 0.3446 | 0.5693 | 0.7545 |
| 0.3537 | 2.9312 | 554 | 0.5486 | 0.3467 | 0.5486 | 0.7406 |
| 0.3537 | 2.9418 | 556 | 0.5306 | 0.3530 | 0.5306 | 0.7285 |
| 0.3537 | 2.9524 | 558 | 0.5239 | 0.3689 | 0.5239 | 0.7238 |
| 0.3537 | 2.9630 | 560 | 0.5296 | 0.3530 | 0.5296 | 0.7277 |
| 0.3537 | 2.9735 | 562 | 0.5554 | 0.3510 | 0.5554 | 0.7453 |
| 0.3537 | 2.9841 | 564 | 0.5865 | 0.3685 | 0.5865 | 0.7658 |
| 0.3537 | 2.9947 | 566 | 0.6308 | 0.3675 | 0.6308 | 0.7942 |
| 0.3537 | 3.0053 | 568 | 0.6720 | 0.3851 | 0.6720 | 0.8198 |
| 0.3537 | 3.0159 | 570 | 0.6466 | 0.4002 | 0.6466 | 0.8041 |
| 0.3537 | 3.0265 | 572 | 0.5833 | 0.3695 | 0.5833 | 0.7637 |
| 0.3537 | 3.0370 | 574 | 0.5367 | 0.2853 | 0.5367 | 0.7326 |
| 0.3537 | 3.0476 | 576 | 0.5327 | 0.3867 | 0.5327 | 0.7299 |
| 0.3537 | 3.0582 | 578 | 0.5350 | 0.4025 | 0.5350 | 0.7314 |
| 0.3537 | 3.0688 | 580 | 0.5332 | 0.3550 | 0.5332 | 0.7302 |
| 0.3537 | 3.0794 | 582 | 0.5439 | 0.3324 | 0.5439 | 0.7375 |
| 0.3537 | 3.0899 | 584 | 0.5542 | 0.3669 | 0.5542 | 0.7444 |
| 0.3537 | 3.1005 | 586 | 0.5702 | 0.3840 | 0.5702 | 0.7551 |
| 0.3537 | 3.1111 | 588 | 0.5932 | 0.3831 | 0.5932 | 0.7702 |
| 0.3537 | 3.1217 | 590 | 0.5802 | 0.3831 | 0.5802 | 0.7617 |
| 0.3537 | 3.1323 | 592 | 0.5459 | 0.3714 | 0.5459 | 0.7388 |
| 0.3537 | 3.1429 | 594 | 0.5249 | 0.3867 | 0.5249 | 0.7245 |
| 0.3537 | 3.1534 | 596 | 0.5317 | 0.4034 | 0.5317 | 0.7292 |
| 0.3537 | 3.1640 | 598 | 0.5332 | 0.3876 | 0.5332 | 0.7302 |
| 0.3537 | 3.1746 | 600 | 0.5289 | 0.3867 | 0.5289 | 0.7272 |
| 0.3537 | 3.1852 | 602 | 0.5434 | 0.3324 | 0.5434 | 0.7371 |
| 0.3537 | 3.1958 | 604 | 0.5804 | 0.3127 | 0.5804 | 0.7619 |
| 0.3537 | 3.2063 | 606 | 0.5915 | 0.3179 | 0.5915 | 0.7691 |
| 0.3537 | 3.2169 | 608 | 0.5871 | 0.3339 | 0.5871 | 0.7662 |
| 0.3537 | 3.2275 | 610 | 0.5685 | 0.3467 | 0.5685 | 0.7540 |
| 0.3537 | 3.2381 | 612 | 0.5576 | 0.3987 | 0.5576 | 0.7467 |
| 0.3537 | 3.2487 | 614 | 0.5523 | 0.3243 | 0.5523 | 0.7432 |
| 0.3537 | 3.2593 | 616 | 0.5540 | 0.3905 | 0.5540 | 0.7443 |
| 0.3537 | 3.2698 | 618 | 0.5562 | 0.3905 | 0.5562 | 0.7458 |
| 0.3537 | 3.2804 | 620 | 0.5588 | 0.3997 | 0.5588 | 0.7475 |
| 0.3537 | 3.2910 | 622 | 0.5614 | 0.3997 | 0.5614 | 0.7492 |
| 0.3537 | 3.3016 | 624 | 0.5563 | 0.4146 | 0.5563 | 0.7459 |
| 0.3537 | 3.3122 | 626 | 0.5549 | 0.4006 | 0.5549 | 0.7449 |
| 0.3537 | 3.3228 | 628 | 0.5584 | 0.3402 | 0.5584 | 0.7472 |
| 0.3537 | 3.3333 | 630 | 0.5637 | 0.3905 | 0.5637 | 0.7508 |
| 0.3537 | 3.3439 | 632 | 0.5657 | 0.3905 | 0.5657 | 0.7522 |
| 0.3537 | 3.3545 | 634 | 0.5566 | 0.3905 | 0.5566 | 0.7460 |
| 0.3537 | 3.3651 | 636 | 0.5496 | 0.3550 | 0.5496 | 0.7413 |
| 0.3537 | 3.3757 | 638 | 0.5548 | 0.3540 | 0.5548 | 0.7448 |
| 0.3537 | 3.3862 | 640 | 0.5565 | 0.3689 | 0.5565 | 0.7460 |
| 0.3537 | 3.3968 | 642 | 0.5544 | 0.3560 | 0.5544 | 0.7446 |
| 0.3537 | 3.4074 | 644 | 0.5610 | 0.3550 | 0.5610 | 0.7490 |
| 0.3537 | 3.4180 | 646 | 0.5699 | 0.3699 | 0.5699 | 0.7549 |
| 0.3537 | 3.4286 | 648 | 0.5757 | 0.4031 | 0.5757 | 0.7587 |
| 0.3537 | 3.4392 | 650 | 0.5827 | 0.4031 | 0.5827 | 0.7633 |
| 0.3537 | 3.4497 | 652 | 0.5844 | 0.4031 | 0.5844 | 0.7645 |
| 0.3537 | 3.4603 | 654 | 0.5836 | 0.4031 | 0.5836 | 0.7639 |
| 0.3537 | 3.4709 | 656 | 0.5860 | 0.4031 | 0.5860 | 0.7655 |
| 0.3537 | 3.4815 | 658 | 0.6025 | 0.4055 | 0.6025 | 0.7762 |
| 0.3537 | 3.4921 | 660 | 0.6219 | 0.4359 | 0.6219 | 0.7886 |
| 0.3537 | 3.5026 | 662 | 0.6511 | 0.4626 | 0.6511 | 0.8069 |
| 0.3537 | 3.5132 | 664 | 0.6717 | 0.4633 | 0.6717 | 0.8196 |
| 0.3537 | 3.5238 | 666 | 0.6527 | 0.4224 | 0.6527 | 0.8079 |
| 0.3537 | 3.5344 | 668 | 0.6128 | 0.4359 | 0.6128 | 0.7828 |
| 0.3537 | 3.5450 | 670 | 0.5914 | 0.4031 | 0.5914 | 0.7691 |
| 0.3537 | 3.5556 | 672 | 0.5799 | 0.3550 | 0.5799 | 0.7615 |
| 0.3537 | 3.5661 | 674 | 0.5776 | 0.3550 | 0.5776 | 0.7600 |
| 0.3537 | 3.5767 | 676 | 0.5834 | 0.3699 | 0.5834 | 0.7638 |
| 0.3537 | 3.5873 | 678 | 0.6014 | 0.3848 | 0.6014 | 0.7755 |
| 0.3537 | 3.5979 | 680 | 0.6374 | 0.4189 | 0.6374 | 0.7984 |
| 0.3537 | 3.6085 | 682 | 0.6721 | 0.3742 | 0.6721 | 0.8198 |
| 0.3537 | 3.6190 | 684 | 0.6854 | 0.3742 | 0.6854 | 0.8279 |
| 0.3537 | 3.6296 | 686 | 0.6602 | 0.3751 | 0.6602 | 0.8125 |
| 0.3537 | 3.6402 | 688 | 0.6214 | 0.4461 | 0.6214 | 0.7883 |
| 0.3537 | 3.6508 | 690 | 0.6040 | 0.4167 | 0.6040 | 0.7772 |
| 0.3537 | 3.6614 | 692 | 0.6113 | 0.4004 | 0.6113 | 0.7818 |
| 0.3537 | 3.6720 | 694 | 0.6191 | 0.4150 | 0.6191 | 0.7868 |
| 0.3537 | 3.6825 | 696 | 0.6269 | 0.3019 | 0.6269 | 0.7918 |
| 0.3537 | 3.6931 | 698 | 0.6231 | 0.2795 | 0.6231 | 0.7893 |
| 0.3537 | 3.7037 | 700 | 0.6270 | 0.2629 | 0.6270 | 0.7918 |
| 0.3537 | 3.7143 | 702 | 0.6191 | 0.2629 | 0.6191 | 0.7868 |
| 0.3537 | 3.7249 | 704 | 0.6075 | 0.3446 | 0.6075 | 0.7794 |
| 0.3537 | 3.7354 | 706 | 0.5912 | 0.3381 | 0.5912 | 0.7689 |
| 0.3537 | 3.7460 | 708 | 0.5852 | 0.3540 | 0.5852 | 0.7650 |
| 0.3537 | 3.7566 | 710 | 0.5924 | 0.3689 | 0.5924 | 0.7697 |
| 0.3537 | 3.7672 | 712 | 0.6097 | 0.3978 | 0.6097 | 0.7808 |
| 0.3537 | 3.7778 | 714 | 0.6273 | 0.3611 | 0.6273 | 0.7920 |
| 0.3537 | 3.7884 | 716 | 0.6465 | 0.3115 | 0.6465 | 0.8040 |
| 0.3537 | 3.7989 | 718 | 0.6554 | 0.3115 | 0.6554 | 0.8096 |
| 0.3537 | 3.8095 | 720 | 0.6607 | 0.3281 | 0.6607 | 0.8129 |
| 0.3537 | 3.8201 | 722 | 0.6453 | 0.3457 | 0.6453 | 0.8033 |
| 0.3537 | 3.8307 | 724 | 0.6204 | 0.2830 | 0.6204 | 0.7877 |
| 0.3537 | 3.8413 | 726 | 0.6030 | 0.3371 | 0.6030 | 0.7765 |
| 0.3537 | 3.8519 | 728 | 0.6171 | 0.3404 | 0.6171 | 0.7856 |
| 0.3537 | 3.8624 | 730 | 0.6375 | 0.4161 | 0.6375 | 0.7984 |
| 0.3537 | 3.8730 | 732 | 0.6520 | 0.3751 | 0.6520 | 0.8074 |
| 0.3537 | 3.8836 | 734 | 0.6545 | 0.3888 | 0.6545 | 0.8090 |
| 0.3537 | 3.8942 | 736 | 0.6343 | 0.4141 | 0.6343 | 0.7964 |
| 0.3537 | 3.9048 | 738 | 0.5991 | 0.2689 | 0.5991 | 0.7740 |
| 0.3537 | 3.9153 | 740 | 0.5813 | 0.2853 | 0.5813 | 0.7624 |
| 0.3537 | 3.9259 | 742 | 0.5936 | 0.3006 | 0.5936 | 0.7705 |
| 0.3537 | 3.9365 | 744 | 0.6093 | 0.2677 | 0.6093 | 0.7806 |
| 0.3537 | 3.9471 | 746 | 0.6285 | 0.3849 | 0.6285 | 0.7928 |
| 0.3537 | 3.9577 | 748 | 0.6450 | 0.3840 | 0.6450 | 0.8031 |
| 0.3537 | 3.9683 | 750 | 0.6420 | 0.3995 | 0.6420 | 0.8013 |
| 0.3537 | 3.9788 | 752 | 0.6137 | 0.2903 | 0.6137 | 0.7834 |
| 0.3537 | 3.9894 | 754 | 0.5920 | 0.2689 | 0.5920 | 0.7694 |
| 0.3537 | 4.0 | 756 | 0.5765 | 0.3029 | 0.5765 | 0.7593 |
| 0.3537 | 4.0106 | 758 | 0.5720 | 0.3029 | 0.5720 | 0.7563 |
| 0.3537 | 4.0212 | 760 | 0.5679 | 0.3193 | 0.5679 | 0.7536 |
| 0.3537 | 4.0317 | 762 | 0.5660 | 0.3193 | 0.5660 | 0.7523 |
| 0.3537 | 4.0423 | 764 | 0.5650 | 0.3193 | 0.5650 | 0.7517 |
| 0.3537 | 4.0529 | 766 | 0.5743 | 0.3182 | 0.5743 | 0.7578 |
| 0.3537 | 4.0635 | 768 | 0.5876 | 0.2842 | 0.5876 | 0.7665 |
| 0.3537 | 4.0741 | 770 | 0.5828 | 0.3062 | 0.5828 | 0.7634 |
| 0.3537 | 4.0847 | 772 | 0.5665 | 0.3182 | 0.5665 | 0.7527 |
| 0.3537 | 4.0952 | 774 | 0.5583 | 0.3550 | 0.5583 | 0.7472 |
| 0.3537 | 4.1058 | 776 | 0.5664 | 0.4031 | 0.5664 | 0.7526 |
| 0.3537 | 4.1164 | 778 | 0.5702 | 0.3877 | 0.5702 | 0.7551 |
| 0.3537 | 4.1270 | 780 | 0.5789 | 0.3714 | 0.5789 | 0.7609 |
| 0.3537 | 4.1376 | 782 | 0.5873 | 0.3849 | 0.5873 | 0.7664 |
| 0.3537 | 4.1481 | 784 | 0.5847 | 0.3868 | 0.5847 | 0.7646 |
| 0.3537 | 4.1587 | 786 | 0.5935 | 0.4313 | 0.5935 | 0.7704 |
| 0.3537 | 4.1693 | 788 | 0.5967 | 0.4313 | 0.5967 | 0.7724 |
| 0.3537 | 4.1799 | 790 | 0.5902 | 0.3868 | 0.5902 | 0.7682 |
| 0.3537 | 4.1905 | 792 | 0.5756 | 0.3877 | 0.5756 | 0.7587 |
| 0.3537 | 4.2011 | 794 | 0.5704 | 0.3699 | 0.5704 | 0.7552 |
| 0.3537 | 4.2116 | 796 | 0.5701 | 0.3699 | 0.5701 | 0.7550 |
| 0.3537 | 4.2222 | 798 | 0.5681 | 0.3699 | 0.5681 | 0.7537 |
| 0.3537 | 4.2328 | 800 | 0.5677 | 0.3699 | 0.5677 | 0.7534 |
| 0.3537 | 4.2434 | 802 | 0.5924 | 0.4004 | 0.5924 | 0.7697 |
| 0.3537 | 4.2540 | 804 | 0.6275 | 0.4020 | 0.6275 | 0.7922 |
| 0.3537 | 4.2646 | 806 | 0.6269 | 0.4020 | 0.6269 | 0.7918 |
| 0.3537 | 4.2751 | 808 | 0.6164 | 0.4461 | 0.6164 | 0.7851 |
| 0.3537 | 4.2857 | 810 | 0.5899 | 0.4611 | 0.5899 | 0.7680 |
| 0.3537 | 4.2963 | 812 | 0.5527 | 0.3877 | 0.5527 | 0.7435 |
| 0.3537 | 4.3069 | 814 | 0.5411 | 0.3857 | 0.5411 | 0.7356 |
| 0.3537 | 4.3175 | 816 | 0.5474 | 0.3877 | 0.5474 | 0.7399 |
| 0.3537 | 4.3280 | 818 | 0.5534 | 0.4466 | 0.5534 | 0.7439 |
| 0.3537 | 4.3386 | 820 | 0.5523 | 0.4466 | 0.5523 | 0.7431 |
| 0.3537 | 4.3492 | 822 | 0.5551 | 0.4146 | 0.5551 | 0.7450 |
| 0.3537 | 4.3598 | 824 | 0.5548 | 0.4475 | 0.5548 | 0.7449 |
| 0.3537 | 4.3704 | 826 | 0.5576 | 0.4321 | 0.5576 | 0.7467 |
| 0.3537 | 4.3810 | 828 | 0.5785 | 0.3987 | 0.5785 | 0.7606 |
| 0.3537 | 4.3915 | 830 | 0.6080 | 0.3510 | 0.6080 | 0.7797 |
| 0.3537 | 4.4021 | 832 | 0.6230 | 0.3510 | 0.6230 | 0.7893 |
| 0.3537 | 4.4127 | 834 | 0.6126 | 0.3510 | 0.6126 | 0.7827 |
| 0.3537 | 4.4233 | 836 | 0.5890 | 0.3828 | 0.5890 | 0.7675 |
| 0.3537 | 4.4339 | 838 | 0.5781 | 0.3987 | 0.5781 | 0.7603 |
| 0.3537 | 4.4444 | 840 | 0.5757 | 0.3987 | 0.5757 | 0.7588 |
| 0.3537 | 4.4550 | 842 | 0.5697 | 0.3530 | 0.5697 | 0.7548 |
| 0.3537 | 4.4656 | 844 | 0.5651 | 0.3006 | 0.5651 | 0.7517 |
| 0.3537 | 4.4762 | 846 | 0.5695 | 0.2842 | 0.5695 | 0.7546 |
| 0.3537 | 4.4868 | 848 | 0.5834 | 0.3149 | 0.5834 | 0.7638 |
| 0.3537 | 4.4974 | 850 | 0.5945 | 0.3457 | 0.5945 | 0.7710 |
| 0.3537 | 4.5079 | 852 | 0.5946 | 0.3457 | 0.5946 | 0.7711 |
| 0.3537 | 4.5185 | 854 | 0.5846 | 0.3149 | 0.5846 | 0.7646 |
| 0.3537 | 4.5291 | 856 | 0.5736 | 0.3149 | 0.5736 | 0.7574 |
| 0.3537 | 4.5397 | 858 | 0.5685 | 0.3314 | 0.5685 | 0.7540 |
| 0.3537 | 4.5503 | 860 | 0.5632 | 0.2842 | 0.5632 | 0.7505 |
| 0.3537 | 4.5608 | 862 | 0.5654 | 0.2842 | 0.5654 | 0.7519 |
| 0.3537 | 4.5714 | 864 | 0.5810 | 0.3303 | 0.5810 | 0.7623 |
| 0.3537 | 4.5820 | 866 | 0.5938 | 0.2807 | 0.5938 | 0.7706 |
| 0.3537 | 4.5926 | 868 | 0.6012 | 0.3179 | 0.6012 | 0.7754 |
| 0.3537 | 4.6032 | 870 | 0.6144 | 0.3019 | 0.6144 | 0.7838 |
| 0.3537 | 4.6138 | 872 | 0.6211 | 0.3019 | 0.6211 | 0.7881 |
| 0.3537 | 4.6243 | 874 | 0.6310 | 0.3019 | 0.6310 | 0.7944 |
| 0.3537 | 4.6349 | 876 | 0.6389 | 0.3417 | 0.6389 | 0.7993 |
| 0.3537 | 4.6455 | 878 | 0.6766 | 0.3163 | 0.6766 | 0.8226 |
| 0.3537 | 4.6561 | 880 | 0.7094 | 0.3486 | 0.7094 | 0.8423 |
| 0.3537 | 4.6667 | 882 | 0.7359 | 0.3714 | 0.7359 | 0.8578 |
| 0.3537 | 4.6772 | 884 | 0.7110 | 0.3820 | 0.7110 | 0.8432 |
| 0.3537 | 4.6878 | 886 | 0.6441 | 0.3879 | 0.6441 | 0.8026 |
| 0.3537 | 4.6984 | 888 | 0.5703 | 0.4037 | 0.5703 | 0.7552 |
| 0.3537 | 4.7090 | 890 | 0.5439 | 0.4345 | 0.5439 | 0.7375 |
| 0.3537 | 4.7196 | 892 | 0.5334 | 0.4049 | 0.5334 | 0.7304 |
| 0.3537 | 4.7302 | 894 | 0.5342 | 0.3709 | 0.5342 | 0.7309 |
| 0.3537 | 4.7407 | 896 | 0.5421 | 0.3018 | 0.5421 | 0.7363 |
| 0.3537 | 4.7513 | 898 | 0.5638 | 0.3062 | 0.5638 | 0.7509 |
| 0.3537 | 4.7619 | 900 | 0.5991 | 0.3840 | 0.5991 | 0.7740 |
| 0.3537 | 4.7725 | 902 | 0.6350 | 0.3675 | 0.6350 | 0.7969 |
| 0.3537 | 4.7831 | 904 | 0.6736 | 0.3407 | 0.6736 | 0.8207 |
| 0.3537 | 4.7937 | 906 | 0.7379 | 0.3585 | 0.7379 | 0.8590 |
| 0.3537 | 4.8042 | 908 | 0.7748 | 0.3923 | 0.7748 | 0.8802 |
| 0.3537 | 4.8148 | 910 | 0.7646 | 0.3923 | 0.7646 | 0.8744 |
| 0.3537 | 4.8254 | 912 | 0.7148 | 0.3585 | 0.7148 | 0.8455 |
| 0.3537 | 4.8360 | 914 | 0.6663 | 0.3266 | 0.6663 | 0.8162 |
| 0.3537 | 4.8466 | 916 | 0.6388 | 0.3685 | 0.6388 | 0.7993 |
| 0.3537 | 4.8571 | 918 | 0.6491 | 0.4035 | 0.6491 | 0.8057 |
| 0.3537 | 4.8677 | 920 | 0.6703 | 0.4026 | 0.6703 | 0.8187 |
| 0.3537 | 4.8783 | 922 | 0.6774 | 0.4172 | 0.6774 | 0.8231 |
| 0.3537 | 4.8889 | 924 | 0.6948 | 0.3742 | 0.6948 | 0.8335 |
| 0.3537 | 4.8995 | 926 | 0.6973 | 0.4172 | 0.6973 | 0.8350 |
| 0.3537 | 4.9101 | 928 | 0.6871 | 0.4011 | 0.6871 | 0.8289 |
| 0.3537 | 4.9206 | 930 | 0.6805 | 0.4011 | 0.6805 | 0.8249 |
| 0.3537 | 4.9312 | 932 | 0.6881 | 0.4161 | 0.6881 | 0.8295 |
| 0.3537 | 4.9418 | 934 | 0.6637 | 0.3695 | 0.6637 | 0.8147 |
| 0.3537 | 4.9524 | 936 | 0.6322 | 0.3659 | 0.6322 | 0.7951 |
| 0.3537 | 4.9630 | 938 | 0.6150 | 0.2995 | 0.6150 | 0.7842 |
| 0.3537 | 4.9735 | 940 | 0.5966 | 0.2842 | 0.5966 | 0.7724 |
| 0.3537 | 4.9841 | 942 | 0.5956 | 0.2830 | 0.5956 | 0.7717 |
| 0.3537 | 4.9947 | 944 | 0.5947 | 0.3303 | 0.5947 | 0.7712 |
| 0.3537 | 5.0053 | 946 | 0.6049 | 0.3457 | 0.6049 | 0.7777 |
| 0.3537 | 5.0159 | 948 | 0.6083 | 0.3457 | 0.6083 | 0.7799 |
| 0.3537 | 5.0265 | 950 | 0.6251 | 0.3292 | 0.6251 | 0.7906 |
| 0.3537 | 5.0370 | 952 | 0.6363 | 0.3292 | 0.6363 | 0.7977 |
| 0.3537 | 5.0476 | 954 | 0.6536 | 0.3446 | 0.6536 | 0.8084 |
| 0.3537 | 5.0582 | 956 | 0.6817 | 0.2795 | 0.6817 | 0.8257 |
| 0.3537 | 5.0688 | 958 | 0.6923 | 0.3168 | 0.6923 | 0.8321 |
| 0.3537 | 5.0794 | 960 | 0.6851 | 0.3977 | 0.6851 | 0.8277 |
| 0.3537 | 5.0899 | 962 | 0.6962 | 0.3977 | 0.6962 | 0.8344 |
| 0.3537 | 5.1005 | 964 | 0.6881 | 0.3977 | 0.6881 | 0.8295 |
| 0.3537 | 5.1111 | 966 | 0.6663 | 0.3639 | 0.6663 | 0.8163 |
| 0.3537 | 5.1217 | 968 | 0.6562 | 0.3179 | 0.6562 | 0.8101 |
| 0.3537 | 5.1323 | 970 | 0.6635 | 0.3530 | 0.6635 | 0.8146 |
| 0.3537 | 5.1429 | 972 | 0.6754 | 0.3719 | 0.6754 | 0.8218 |
| 0.3537 | 5.1534 | 974 | 0.6950 | 0.3860 | 0.6950 | 0.8337 |
| 0.3537 | 5.1640 | 976 | 0.6799 | 0.3728 | 0.6799 | 0.8246 |
| 0.3537 | 5.1746 | 978 | 0.6577 | 0.3737 | 0.6577 | 0.8110 |
| 0.3537 | 5.1852 | 980 | 0.6547 | 0.3888 | 0.6547 | 0.8091 |
| 0.3537 | 5.1958 | 982 | 0.6429 | 0.3747 | 0.6429 | 0.8018 |
| 0.3537 | 5.2063 | 984 | 0.6376 | 0.3897 | 0.6376 | 0.7985 |
| 0.3537 | 5.2169 | 986 | 0.6274 | 0.3897 | 0.6274 | 0.7921 |
| 0.3537 | 5.2275 | 988 | 0.6169 | 0.3569 | 0.6169 | 0.7854 |
| 0.3537 | 5.2381 | 990 | 0.6284 | 0.3569 | 0.6284 | 0.7927 |
| 0.3537 | 5.2487 | 992 | 0.6631 | 0.3888 | 0.6631 | 0.8143 |
| 0.3537 | 5.2593 | 994 | 0.7088 | 0.3719 | 0.7088 | 0.8419 |
| 0.3537 | 5.2698 | 996 | 0.7445 | 0.4002 | 0.7445 | 0.8629 |
| 0.3537 | 5.2804 | 998 | 0.7333 | 0.3851 | 0.7333 | 0.8563 |
| 0.068 | 5.2910 | 1000 | 0.6968 | 0.3530 | 0.6968 | 0.8348 |
| 0.068 | 5.3016 | 1002 | 0.6527 | 0.3030 | 0.6527 | 0.8079 |
| 0.068 | 5.3122 | 1004 | 0.6270 | 0.2677 | 0.6270 | 0.7918 |
| 0.068 | 5.3228 | 1006 | 0.6046 | 0.3006 | 0.6046 | 0.7776 |
| 0.068 | 5.3333 | 1008 | 0.5912 | 0.2853 | 0.5912 | 0.7689 |
| 0.068 | 5.3439 | 1010 | 0.5876 | 0.3540 | 0.5876 | 0.7666 |
| 0.068 | 5.3545 | 1012 | 0.5979 | 0.3006 | 0.5979 | 0.7732 |
| 0.068 | 5.3651 | 1014 | 0.6061 | 0.3211 | 0.6061 | 0.7785 |
| 0.068 | 5.3757 | 1016 | 0.6262 | 0.2583 | 0.6262 | 0.7913 |
| 0.068 | 5.3862 | 1018 | 0.6413 | 0.3840 | 0.6413 | 0.8008 |
| 0.068 | 5.3968 | 1020 | 0.6527 | 0.3840 | 0.6527 | 0.8079 |
| 0.068 | 5.4074 | 1022 | 0.6759 | 0.4161 | 0.6759 | 0.8221 |
| 0.068 | 5.4180 | 1024 | 0.6892 | 0.3860 | 0.6892 | 0.8302 |
| 0.068 | 5.4286 | 1026 | 0.6880 | 0.3860 | 0.6880 | 0.8295 |
| 0.068 | 5.4392 | 1028 | 0.6637 | 0.3685 | 0.6637 | 0.8147 |
| 0.068 | 5.4497 | 1030 | 0.6420 | 0.3840 | 0.6420 | 0.8012 |
| 0.068 | 5.4603 | 1032 | 0.6298 | 0.3840 | 0.6298 | 0.7936 |
| 0.068 | 5.4709 | 1034 | 0.6514 | 0.3685 | 0.6514 | 0.8071 |
| 0.068 | 5.4815 | 1036 | 0.6640 | 0.3685 | 0.6640 | 0.8149 |
| 0.068 | 5.4921 | 1038 | 0.6914 | 0.3860 | 0.6914 | 0.8315 |
| 0.068 | 5.5026 | 1040 | 0.7028 | 0.3218 | 0.7028 | 0.8384 |
| 0.068 | 5.5132 | 1042 | 0.7242 | 0.3364 | 0.7242 | 0.8510 |
| 0.068 | 5.5238 | 1044 | 0.7272 | 0.3700 | 0.7272 | 0.8527 |
| 0.068 | 5.5344 | 1046 | 0.7074 | 0.3700 | 0.7074 | 0.8411 |
| 0.068 | 5.5450 | 1048 | 0.6934 | 0.3700 | 0.6934 | 0.8327 |
| 0.068 | 5.5556 | 1050 | 0.6841 | 0.3700 | 0.6841 | 0.8271 |
| 0.068 | 5.5661 | 1052 | 0.6656 | 0.3374 | 0.6656 | 0.8158 |
| 0.068 | 5.5767 | 1054 | 0.6402 | 0.3685 | 0.6402 | 0.8001 |
| 0.068 | 5.5873 | 1056 | 0.6192 | 0.3190 | 0.6192 | 0.7869 |
| 0.068 | 5.5979 | 1058 | 0.6117 | 0.3190 | 0.6117 | 0.7821 |
| 0.068 | 5.6085 | 1060 | 0.6168 | 0.3190 | 0.6168 | 0.7854 |
| 0.068 | 5.6190 | 1062 | 0.6258 | 0.3530 | 0.6258 | 0.7911 |
| 0.068 | 5.6296 | 1064 | 0.6531 | 0.3520 | 0.6531 | 0.8081 |
| 0.068 | 5.6402 | 1066 | 0.6774 | 0.3851 | 0.6774 | 0.8231 |
| 0.068 | 5.6508 | 1068 | 0.6952 | 0.3700 | 0.6952 | 0.8338 |
| 0.068 | 5.6614 | 1070 | 0.7203 | 0.3700 | 0.7203 | 0.8487 |
| 0.068 | 5.6720 | 1072 | 0.7086 | 0.3700 | 0.7086 | 0.8418 |
| 0.068 | 5.6825 | 1074 | 0.6826 | 0.4610 | 0.6826 | 0.8262 |
| 0.068 | 5.6931 | 1076 | 0.6665 | 0.4618 | 0.6665 | 0.8164 |
| 0.068 | 5.7037 | 1078 | 0.6649 | 0.4618 | 0.6649 | 0.8154 |
| 0.068 | 5.7143 | 1080 | 0.6437 | 0.4335 | 0.6437 | 0.8023 |
| 0.068 | 5.7249 | 1082 | 0.6334 | 0.4179 | 0.6334 | 0.7959 |
| 0.068 | 5.7354 | 1084 | 0.6521 | 0.4756 | 0.6521 | 0.8075 |
| 0.068 | 5.7460 | 1086 | 0.6861 | 0.4476 | 0.6861 | 0.8283 |
| 0.068 | 5.7566 | 1088 | 0.7312 | 0.3931 | 0.7312 | 0.8551 |
| 0.068 | 5.7672 | 1090 | 0.7597 | 0.3820 | 0.7597 | 0.8716 |
| 0.068 | 5.7778 | 1092 | 0.7460 | 0.3792 | 0.7460 | 0.8637 |
| 0.068 | 5.7884 | 1094 | 0.7123 | 0.3906 | 0.7123 | 0.8440 |
| 0.068 | 5.7989 | 1096 | 0.6769 | 0.3831 | 0.6769 | 0.8228 |
| 0.068 | 5.8095 | 1098 | 0.6466 | 0.3339 | 0.6466 | 0.8041 |
| 0.068 | 5.8201 | 1100 | 0.6283 | 0.4004 | 0.6283 | 0.7927 |
| 0.068 | 5.8307 | 1102 | 0.6332 | 0.3138 | 0.6332 | 0.7957 |
| 0.068 | 5.8413 | 1104 | 0.6462 | 0.3138 | 0.6462 | 0.8038 |
| 0.068 | 5.8519 | 1106 | 0.6550 | 0.2807 | 0.6550 | 0.8093 |
| 0.068 | 5.8624 | 1108 | 0.6530 | 0.2807 | 0.6530 | 0.8081 |
| 0.068 | 5.8730 | 1110 | 0.6434 | 0.2807 | 0.6434 | 0.8021 |
| 0.068 | 5.8836 | 1112 | 0.6211 | 0.3659 | 0.6211 | 0.7881 |
| 0.068 | 5.8942 | 1114 | 0.6119 | 0.3360 | 0.6119 | 0.7822 |
| 0.068 | 5.9048 | 1116 | 0.6164 | 0.3041 | 0.6164 | 0.7851 |
| 0.068 | 5.9153 | 1118 | 0.6281 | 0.3540 | 0.6281 | 0.7925 |
| 0.068 | 5.9259 | 1120 | 0.6345 | 0.3384 | 0.6345 | 0.7966 |
| 0.068 | 5.9365 | 1122 | 0.6346 | 0.3239 | 0.6346 | 0.7966 |
| 0.068 | 5.9471 | 1124 | 0.6177 | 0.3041 | 0.6177 | 0.7859 |
| 0.068 | 5.9577 | 1126 | 0.6026 | 0.3052 | 0.6026 | 0.7763 |
| 0.068 | 5.9683 | 1128 | 0.6083 | 0.3052 | 0.6083 | 0.7800 |
| 0.068 | 5.9788 | 1130 | 0.6145 | 0.3859 | 0.6145 | 0.7839 |
| 0.068 | 5.9894 | 1132 | 0.6217 | 0.4004 | 0.6217 | 0.7885 |
| 0.068 | 6.0 | 1134 | 0.6335 | 0.3549 | 0.6335 | 0.7959 |
| 0.068 | 6.0106 | 1136 | 0.6506 | 0.4311 | 0.6506 | 0.8066 |
| 0.068 | 6.0212 | 1138 | 0.6578 | 0.4311 | 0.6578 | 0.8110 |
| 0.068 | 6.0317 | 1140 | 0.6525 | 0.4753 | 0.6525 | 0.8078 |
| 0.068 | 6.0423 | 1142 | 0.6447 | 0.4320 | 0.6447 | 0.8030 |
| 0.068 | 6.0529 | 1144 | 0.6335 | 0.4328 | 0.6335 | 0.7959 |
| 0.068 | 6.0635 | 1146 | 0.6168 | 0.4013 | 0.6168 | 0.7854 |
| 0.068 | 6.0741 | 1148 | 0.6062 | 0.4013 | 0.6062 | 0.7786 |
| 0.068 | 6.0847 | 1150 | 0.6033 | 0.3679 | 0.6033 | 0.7768 |
| 0.068 | 6.0952 | 1152 | 0.6111 | 0.3679 | 0.6111 | 0.7817 |
| 0.068 | 6.1058 | 1154 | 0.6301 | 0.3549 | 0.6301 | 0.7938 |
| 0.068 | 6.1164 | 1156 | 0.6556 | 0.3719 | 0.6556 | 0.8097 |
| 0.068 | 6.1270 | 1158 | 0.6754 | 0.4002 | 0.6754 | 0.8218 |
| 0.068 | 6.1376 | 1160 | 0.6819 | 0.4002 | 0.6819 | 0.8258 |
| 0.068 | 6.1481 | 1162 | 0.6735 | 0.4002 | 0.6735 | 0.8207 |
| 0.068 | 6.1587 | 1164 | 0.6550 | 0.4002 | 0.6550 | 0.8093 |
| 0.068 | 6.1693 | 1166 | 0.6417 | 0.4002 | 0.6417 | 0.8010 |
| 0.068 | 6.1799 | 1168 | 0.6354 | 0.4002 | 0.6354 | 0.7971 |
| 0.068 | 6.1905 | 1170 | 0.6479 | 0.4002 | 0.6479 | 0.8050 |
| 0.068 | 6.2011 | 1172 | 0.6823 | 0.4002 | 0.6823 | 0.8260 |
| 0.068 | 6.2116 | 1174 | 0.7213 | 0.4326 | 0.7213 | 0.8493 |
| 0.068 | 6.2222 | 1176 | 0.7379 | 0.4326 | 0.7379 | 0.8590 |
| 0.068 | 6.2328 | 1178 | 0.7424 | 0.4326 | 0.7424 | 0.8616 |
| 0.068 | 6.2434 | 1180 | 0.7372 | 0.4326 | 0.7372 | 0.8586 |
| 0.068 | 6.2540 | 1182 | 0.7161 | 0.4326 | 0.7161 | 0.8462 |
| 0.068 | 6.2646 | 1184 | 0.6908 | 0.4326 | 0.6908 | 0.8311 |
| 0.068 | 6.2751 | 1186 | 0.6650 | 0.4002 | 0.6650 | 0.8155 |
| 0.068 | 6.2857 | 1188 | 0.6619 | 0.4002 | 0.6619 | 0.8136 |
| 0.068 | 6.2963 | 1190 | 0.6761 | 0.4002 | 0.6761 | 0.8223 |
| 0.068 | 6.3069 | 1192 | 0.6944 | 0.4311 | 0.6944 | 0.8333 |
| 0.068 | 6.3175 | 1194 | 0.7262 | 0.4326 | 0.7262 | 0.8522 |
| 0.068 | 6.3280 | 1196 | 0.7687 | 0.3763 | 0.7687 | 0.8768 |
| 0.068 | 6.3386 | 1198 | 0.7779 | 0.3763 | 0.7779 | 0.8820 |
| 0.068 | 6.3492 | 1200 | 0.7600 | 0.3732 | 0.7600 | 0.8718 |
| 0.068 | 6.3598 | 1202 | 0.7259 | 0.3732 | 0.7259 | 0.8520 |
| 0.068 | 6.3704 | 1204 | 0.6963 | 0.3700 | 0.6963 | 0.8345 |
| 0.068 | 6.3810 | 1206 | 0.6615 | 0.4002 | 0.6615 | 0.8133 |
| 0.068 | 6.3915 | 1208 | 0.6434 | 0.4002 | 0.6434 | 0.8021 |
| 0.068 | 6.4021 | 1210 | 0.6418 | 0.4002 | 0.6418 | 0.8011 |
| 0.068 | 6.4127 | 1212 | 0.6398 | 0.4153 | 0.6398 | 0.7999 |
| 0.068 | 6.4233 | 1214 | 0.6487 | 0.4002 | 0.6487 | 0.8054 |
| 0.068 | 6.4339 | 1216 | 0.6649 | 0.4002 | 0.6649 | 0.8154 |
| 0.068 | 6.4444 | 1218 | 0.6652 | 0.4002 | 0.6652 | 0.8156 |
| 0.068 | 6.4550 | 1220 | 0.6617 | 0.4002 | 0.6617 | 0.8134 |
| 0.068 | 6.4656 | 1222 | 0.6664 | 0.4002 | 0.6664 | 0.8164 |
| 0.068 | 6.4762 | 1224 | 0.6626 | 0.4002 | 0.6626 | 0.8140 |
| 0.068 | 6.4868 | 1226 | 0.6563 | 0.4002 | 0.6563 | 0.8101 |
| 0.068 | 6.4974 | 1228 | 0.6399 | 0.4002 | 0.6399 | 0.8000 |
| 0.068 | 6.5079 | 1230 | 0.6345 | 0.3675 | 0.6345 | 0.7966 |
| 0.068 | 6.5185 | 1232 | 0.6320 | 0.3675 | 0.6320 | 0.7950 |
| 0.068 | 6.5291 | 1234 | 0.6171 | 0.3540 | 0.6171 | 0.7855 |
| 0.068 | 6.5397 | 1236 | 0.6116 | 0.3695 | 0.6116 | 0.7821 |
| 0.068 | 6.5503 | 1238 | 0.6156 | 0.3695 | 0.6156 | 0.7846 |
| 0.068 | 6.5608 | 1240 | 0.6257 | 0.3831 | 0.6257 | 0.7910 |
| 0.068 | 6.5714 | 1242 | 0.6278 | 0.3831 | 0.6278 | 0.7923 |
| 0.068 | 6.5820 | 1244 | 0.6164 | 0.3831 | 0.6164 | 0.7851 |
| 0.068 | 6.5926 | 1246 | 0.6157 | 0.3840 | 0.6157 | 0.7847 |
| 0.068 | 6.6032 | 1248 | 0.6142 | 0.3840 | 0.6142 | 0.7837 |
| 0.068 | 6.6138 | 1250 | 0.6148 | 0.4465 | 0.6148 | 0.7841 |
| 0.068 | 6.6243 | 1252 | 0.6193 | 0.4603 | 0.6193 | 0.7870 |
| 0.068 | 6.6349 | 1254 | 0.6189 | 0.4603 | 0.6189 | 0.7867 |
| 0.068 | 6.6455 | 1256 | 0.6256 | 0.4603 | 0.6256 | 0.7909 |
| 0.068 | 6.6561 | 1258 | 0.6241 | 0.4603 | 0.6241 | 0.7900 |
| 0.068 | 6.6667 | 1260 | 0.6263 | 0.4002 | 0.6263 | 0.7914 |
| 0.068 | 6.6772 | 1262 | 0.6244 | 0.3675 | 0.6244 | 0.7902 |
| 0.068 | 6.6878 | 1264 | 0.6203 | 0.3986 | 0.6203 | 0.7876 |
| 0.068 | 6.6984 | 1266 | 0.6204 | 0.3986 | 0.6204 | 0.7876 |
| 0.068 | 6.7090 | 1268 | 0.6256 | 0.3986 | 0.6256 | 0.7910 |
| 0.068 | 6.7196 | 1270 | 0.6406 | 0.3986 | 0.6406 | 0.8004 |
| 0.068 | 6.7302 | 1272 | 0.6507 | 0.3831 | 0.6507 | 0.8067 |
| 0.068 | 6.7407 | 1274 | 0.6642 | 0.3008 | 0.6642 | 0.8150 |
| 0.068 | 6.7513 | 1276 | 0.6650 | 0.3008 | 0.6650 | 0.8155 |
| 0.068 | 6.7619 | 1278 | 0.6719 | 0.3008 | 0.6719 | 0.8197 |
| 0.068 | 6.7725 | 1280 | 0.6713 | 0.3008 | 0.6713 | 0.8193 |
| 0.068 | 6.7831 | 1282 | 0.6681 | 0.3008 | 0.6681 | 0.8174 |
| 0.068 | 6.7937 | 1284 | 0.6691 | 0.3008 | 0.6691 | 0.8180 |
| 0.068 | 6.8042 | 1286 | 0.6746 | 0.3008 | 0.6746 | 0.8214 |
| 0.068 | 6.8148 | 1288 | 0.6774 | 0.3008 | 0.6774 | 0.8230 |
| 0.068 | 6.8254 | 1290 | 0.6778 | 0.3008 | 0.6778 | 0.8233 |
| 0.068 | 6.8360 | 1292 | 0.6877 | 0.3364 | 0.6877 | 0.8293 |
| 0.068 | 6.8466 | 1294 | 0.6777 | 0.4153 | 0.6777 | 0.8233 |
| 0.068 | 6.8571 | 1296 | 0.6640 | 0.4153 | 0.6640 | 0.8148 |
| 0.068 | 6.8677 | 1298 | 0.6536 | 0.4303 | 0.6536 | 0.8085 |
| 0.068 | 6.8783 | 1300 | 0.6363 | 0.4311 | 0.6363 | 0.7977 |
| 0.068 | 6.8889 | 1302 | 0.6206 | 0.4170 | 0.6206 | 0.7878 |
| 0.068 | 6.8995 | 1304 | 0.6076 | 0.3859 | 0.6076 | 0.7795 |
| 0.068 | 6.9101 | 1306 | 0.6086 | 0.4013 | 0.6086 | 0.7802 |
| 0.068 | 6.9206 | 1308 | 0.6171 | 0.4320 | 0.6171 | 0.7856 |
| 0.068 | 6.9312 | 1310 | 0.6207 | 0.4461 | 0.6207 | 0.7878 |
| 0.068 | 6.9418 | 1312 | 0.6231 | 0.4311 | 0.6231 | 0.7894 |
| 0.068 | 6.9524 | 1314 | 0.6346 | 0.4311 | 0.6346 | 0.7966 |
| 0.068 | 6.9630 | 1316 | 0.6559 | 0.4181 | 0.6559 | 0.8099 |
| 0.068 | 6.9735 | 1318 | 0.6616 | 0.4181 | 0.6616 | 0.8134 |
| 0.068 | 6.9841 | 1320 | 0.6639 | 0.4181 | 0.6639 | 0.8148 |
| 0.068 | 6.9947 | 1322 | 0.6648 | 0.4181 | 0.6648 | 0.8153 |
| 0.068 | 7.0053 | 1324 | 0.6654 | 0.4181 | 0.6654 | 0.8157 |
| 0.068 | 7.0159 | 1326 | 0.6594 | 0.4181 | 0.6594 | 0.8120 |
| 0.068 | 7.0265 | 1328 | 0.6480 | 0.3869 | 0.6480 | 0.8050 |
| 0.068 | 7.0370 | 1330 | 0.6332 | 0.3869 | 0.6332 | 0.7957 |
| 0.068 | 7.0476 | 1332 | 0.6277 | 0.3869 | 0.6277 | 0.7923 |
| 0.068 | 7.0582 | 1334 | 0.6215 | 0.3869 | 0.6215 | 0.7884 |
| 0.068 | 7.0688 | 1336 | 0.6188 | 0.3869 | 0.6188 | 0.7866 |
| 0.068 | 7.0794 | 1338 | 0.6142 | 0.3869 | 0.6142 | 0.7837 |
| 0.068 | 7.0899 | 1340 | 0.6087 | 0.3869 | 0.6087 | 0.7802 |
| 0.068 | 7.1005 | 1342 | 0.6109 | 0.3869 | 0.6109 | 0.7816 |
| 0.068 | 7.1111 | 1344 | 0.6107 | 0.3719 | 0.6107 | 0.7815 |
| 0.068 | 7.1217 | 1346 | 0.6084 | 0.3869 | 0.6084 | 0.7800 |
| 0.068 | 7.1323 | 1348 | 0.6092 | 0.3719 | 0.6092 | 0.7805 |
| 0.068 | 7.1429 | 1350 | 0.6154 | 0.3719 | 0.6154 | 0.7845 |
| 0.068 | 7.1534 | 1352 | 0.6239 | 0.3719 | 0.6239 | 0.7899 |
| 0.068 | 7.1640 | 1354 | 0.6314 | 0.3364 | 0.6314 | 0.7946 |
| 0.068 | 7.1746 | 1356 | 0.6343 | 0.3364 | 0.6343 | 0.7964 |
| 0.068 | 7.1852 | 1358 | 0.6368 | 0.3364 | 0.6368 | 0.7980 |
| 0.068 | 7.1958 | 1360 | 0.6358 | 0.3364 | 0.6358 | 0.7974 |
| 0.068 | 7.2063 | 1362 | 0.6365 | 0.3364 | 0.6365 | 0.7978 |
| 0.068 | 7.2169 | 1364 | 0.6333 | 0.3364 | 0.6333 | 0.7958 |
| 0.068 | 7.2275 | 1366 | 0.6310 | 0.3364 | 0.6310 | 0.7943 |
| 0.068 | 7.2381 | 1368 | 0.6292 | 0.3364 | 0.6292 | 0.7932 |
| 0.068 | 7.2487 | 1370 | 0.6352 | 0.3520 | 0.6352 | 0.7970 |
| 0.068 | 7.2593 | 1372 | 0.6411 | 0.3851 | 0.6411 | 0.8007 |
| 0.068 | 7.2698 | 1374 | 0.6423 | 0.3851 | 0.6423 | 0.8015 |
| 0.068 | 7.2804 | 1376 | 0.6403 | 0.3851 | 0.6403 | 0.8002 |
| 0.068 | 7.2910 | 1378 | 0.6433 | 0.3851 | 0.6433 | 0.8021 |
| 0.068 | 7.3016 | 1380 | 0.6432 | 0.3851 | 0.6432 | 0.8020 |
| 0.068 | 7.3122 | 1382 | 0.6432 | 0.3851 | 0.6432 | 0.8020 |
| 0.068 | 7.3228 | 1384 | 0.6514 | 0.3851 | 0.6514 | 0.8071 |
| 0.068 | 7.3333 | 1386 | 0.6715 | 0.3851 | 0.6715 | 0.8194 |
| 0.068 | 7.3439 | 1388 | 0.7023 | 0.3700 | 0.7023 | 0.8380 |
| 0.068 | 7.3545 | 1390 | 0.7325 | 0.3732 | 0.7325 | 0.8558 |
| 0.068 | 7.3651 | 1392 | 0.7418 | 0.3732 | 0.7418 | 0.8613 |
| 0.068 | 7.3757 | 1394 | 0.7313 | 0.3732 | 0.7313 | 0.8551 |
| 0.068 | 7.3862 | 1396 | 0.7062 | 0.3700 | 0.7062 | 0.8404 |
| 0.068 | 7.3968 | 1398 | 0.6893 | 0.3700 | 0.6893 | 0.8302 |
| 0.068 | 7.4074 | 1400 | 0.6744 | 0.3700 | 0.6744 | 0.8212 |
| 0.068 | 7.4180 | 1402 | 0.6566 | 0.3700 | 0.6566 | 0.8103 |
| 0.068 | 7.4286 | 1404 | 0.6430 | 0.3364 | 0.6430 | 0.8019 |
| 0.068 | 7.4392 | 1406 | 0.6316 | 0.3520 | 0.6316 | 0.7948 |
| 0.068 | 7.4497 | 1408 | 0.6242 | 0.3520 | 0.6242 | 0.7901 |
| 0.068 | 7.4603 | 1410 | 0.6127 | 0.4132 | 0.6127 | 0.7827 |
| 0.068 | 7.4709 | 1412 | 0.6024 | 0.4132 | 0.6024 | 0.7761 |
| 0.068 | 7.4815 | 1414 | 0.5957 | 0.4132 | 0.5957 | 0.7718 |
| 0.068 | 7.4921 | 1416 | 0.5963 | 0.4132 | 0.5963 | 0.7722 |
| 0.068 | 7.5026 | 1418 | 0.6050 | 0.4132 | 0.6050 | 0.7778 |
| 0.068 | 7.5132 | 1420 | 0.6178 | 0.4132 | 0.6178 | 0.7860 |
| 0.068 | 7.5238 | 1422 | 0.6330 | 0.3520 | 0.6330 | 0.7956 |
| 0.068 | 7.5344 | 1424 | 0.6526 | 0.3520 | 0.6526 | 0.8078 |
| 0.068 | 7.5450 | 1426 | 0.6694 | 0.3364 | 0.6694 | 0.8181 |
| 0.068 | 7.5556 | 1428 | 0.6708 | 0.3364 | 0.6708 | 0.8191 |
| 0.068 | 7.5661 | 1430 | 0.6793 | 0.3364 | 0.6793 | 0.8242 |
| 0.068 | 7.5767 | 1432 | 0.6893 | 0.3700 | 0.6893 | 0.8302 |
| 0.068 | 7.5873 | 1434 | 0.6933 | 0.3700 | 0.6933 | 0.8326 |
| 0.068 | 7.5979 | 1436 | 0.7007 | 0.3549 | 0.7007 | 0.8371 |
| 0.068 | 7.6085 | 1438 | 0.6972 | 0.3549 | 0.6972 | 0.8350 |
| 0.068 | 7.6190 | 1440 | 0.6790 | 0.3700 | 0.6790 | 0.8240 |
| 0.068 | 7.6296 | 1442 | 0.6555 | 0.3700 | 0.6555 | 0.8096 |
| 0.068 | 7.6402 | 1444 | 0.6298 | 0.3520 | 0.6298 | 0.7936 |
| 0.068 | 7.6508 | 1446 | 0.6181 | 0.4132 | 0.6181 | 0.7862 |
| 0.068 | 7.6614 | 1448 | 0.6176 | 0.4445 | 0.6176 | 0.7859 |
| 0.068 | 7.6720 | 1450 | 0.6265 | 0.4445 | 0.6265 | 0.7915 |
| 0.068 | 7.6825 | 1452 | 0.6364 | 0.4002 | 0.6364 | 0.7978 |
| 0.068 | 7.6931 | 1454 | 0.6454 | 0.4002 | 0.6454 | 0.8034 |
| 0.068 | 7.7037 | 1456 | 0.6656 | 0.3700 | 0.6656 | 0.8159 |
| 0.068 | 7.7143 | 1458 | 0.6828 | 0.3763 | 0.6828 | 0.8263 |
| 0.068 | 7.7249 | 1460 | 0.6921 | 0.3763 | 0.6921 | 0.8319 |
| 0.068 | 7.7354 | 1462 | 0.6903 | 0.3763 | 0.6903 | 0.8308 |
| 0.068 | 7.7460 | 1464 | 0.6685 | 0.3732 | 0.6685 | 0.8176 |
| 0.068 | 7.7566 | 1466 | 0.6365 | 0.3700 | 0.6365 | 0.7978 |
| 0.068 | 7.7672 | 1468 | 0.6188 | 0.3851 | 0.6188 | 0.7866 |
| 0.068 | 7.7778 | 1470 | 0.6155 | 0.3851 | 0.6155 | 0.7845 |
| 0.068 | 7.7884 | 1472 | 0.6224 | 0.3364 | 0.6224 | 0.7889 |
| 0.068 | 7.7989 | 1474 | 0.6401 | 0.3700 | 0.6401 | 0.8000 |
| 0.068 | 7.8095 | 1476 | 0.6546 | 0.3700 | 0.6546 | 0.8090 |
| 0.068 | 7.8201 | 1478 | 0.6678 | 0.3700 | 0.6678 | 0.8172 |
| 0.068 | 7.8307 | 1480 | 0.6671 | 0.3700 | 0.6671 | 0.8168 |
| 0.068 | 7.8413 | 1482 | 0.6652 | 0.3700 | 0.6652 | 0.8156 |
| 0.068 | 7.8519 | 1484 | 0.6625 | 0.3700 | 0.6625 | 0.8139 |
| 0.068 | 7.8624 | 1486 | 0.6503 | 0.3364 | 0.6503 | 0.8064 |
| 0.068 | 7.8730 | 1488 | 0.6417 | 0.3364 | 0.6417 | 0.8011 |
| 0.068 | 7.8836 | 1490 | 0.6258 | 0.3008 | 0.6258 | 0.7911 |
| 0.068 | 7.8942 | 1492 | 0.6084 | 0.3329 | 0.6084 | 0.7800 |
| 0.068 | 7.9048 | 1494 | 0.5995 | 0.3329 | 0.5995 | 0.7743 |
| 0.068 | 7.9153 | 1496 | 0.5926 | 0.3329 | 0.5926 | 0.7698 |
| 0.068 | 7.9259 | 1498 | 0.5912 | 0.3329 | 0.5912 | 0.7689 |
| 0.0477 | 7.9365 | 1500 | 0.5915 | 0.3799 | 0.5915 | 0.7691 |
| 0.0477 | 7.9471 | 1502 | 0.5889 | 0.3799 | 0.5889 | 0.7674 |
| 0.0477 | 7.9577 | 1504 | 0.5862 | 0.3799 | 0.5862 | 0.7657 |
| 0.0477 | 7.9683 | 1506 | 0.5870 | 0.3799 | 0.5870 | 0.7662 |
| 0.0477 | 7.9788 | 1508 | 0.5926 | 0.3799 | 0.5926 | 0.7698 |
| 0.0477 | 7.9894 | 1510 | 0.6015 | 0.3329 | 0.6015 | 0.7756 |
| 0.0477 | 8.0 | 1512 | 0.6179 | 0.3008 | 0.6179 | 0.7860 |
| 0.0477 | 8.0106 | 1514 | 0.6266 | 0.3008 | 0.6266 | 0.7916 |
| 0.0477 | 8.0212 | 1516 | 0.6352 | 0.3008 | 0.6352 | 0.7970 |
| 0.0477 | 8.0317 | 1518 | 0.6364 | 0.3008 | 0.6364 | 0.7978 |
| 0.0477 | 8.0423 | 1520 | 0.6304 | 0.3008 | 0.6304 | 0.7940 |
| 0.0477 | 8.0529 | 1522 | 0.6283 | 0.3168 | 0.6283 | 0.7926 |
| 0.0477 | 8.0635 | 1524 | 0.6321 | 0.3168 | 0.6321 | 0.7950 |
| 0.0477 | 8.0741 | 1526 | 0.6369 | 0.3008 | 0.6369 | 0.7981 |
| 0.0477 | 8.0847 | 1528 | 0.6435 | 0.3008 | 0.6435 | 0.8022 |
| 0.0477 | 8.0952 | 1530 | 0.6518 | 0.3364 | 0.6518 | 0.8073 |
| 0.0477 | 8.1058 | 1532 | 0.6625 | 0.3364 | 0.6625 | 0.8139 |
| 0.0477 | 8.1164 | 1534 | 0.6636 | 0.3364 | 0.6636 | 0.8146 |
| 0.0477 | 8.1270 | 1536 | 0.6627 | 0.3364 | 0.6627 | 0.8140 |
| 0.0477 | 8.1376 | 1538 | 0.6644 | 0.3364 | 0.6644 | 0.8151 |
| 0.0477 | 8.1481 | 1540 | 0.6734 | 0.3364 | 0.6734 | 0.8206 |
| 0.0477 | 8.1587 | 1542 | 0.6873 | 0.3364 | 0.6873 | 0.8290 |
| 0.0477 | 8.1693 | 1544 | 0.6932 | 0.3364 | 0.6932 | 0.8326 |
| 0.0477 | 8.1799 | 1546 | 0.6869 | 0.3364 | 0.6869 | 0.8288 |
| 0.0477 | 8.1905 | 1548 | 0.6802 | 0.3364 | 0.6802 | 0.8247 |
| 0.0477 | 8.2011 | 1550 | 0.6734 | 0.3364 | 0.6734 | 0.8206 |
| 0.0477 | 8.2116 | 1552 | 0.6761 | 0.3364 | 0.6761 | 0.8223 |
| 0.0477 | 8.2222 | 1554 | 0.6850 | 0.3364 | 0.6850 | 0.8277 |
| 0.0477 | 8.2328 | 1556 | 0.6867 | 0.3364 | 0.6867 | 0.8287 |
| 0.0477 | 8.2434 | 1558 | 0.6859 | 0.3364 | 0.6859 | 0.8282 |
| 0.0477 | 8.2540 | 1560 | 0.6784 | 0.3364 | 0.6784 | 0.8236 |
| 0.0477 | 8.2646 | 1562 | 0.6719 | 0.3364 | 0.6719 | 0.8197 |
| 0.0477 | 8.2751 | 1564 | 0.6715 | 0.3364 | 0.6715 | 0.8195 |
| 0.0477 | 8.2857 | 1566 | 0.6701 | 0.3364 | 0.6701 | 0.8186 |
| 0.0477 | 8.2963 | 1568 | 0.6732 | 0.3364 | 0.6732 | 0.8205 |
| 0.0477 | 8.3069 | 1570 | 0.6805 | 0.3364 | 0.6805 | 0.8249 |
| 0.0477 | 8.3175 | 1572 | 0.6897 | 0.3364 | 0.6897 | 0.8305 |
| 0.0477 | 8.3280 | 1574 | 0.6993 | 0.3364 | 0.6993 | 0.8363 |
| 0.0477 | 8.3386 | 1576 | 0.7050 | 0.3364 | 0.7050 | 0.8396 |
| 0.0477 | 8.3492 | 1578 | 0.6985 | 0.3364 | 0.6985 | 0.8358 |
| 0.0477 | 8.3598 | 1580 | 0.6904 | 0.3364 | 0.6904 | 0.8309 |
| 0.0477 | 8.3704 | 1582 | 0.6760 | 0.3364 | 0.6760 | 0.8222 |
| 0.0477 | 8.3810 | 1584 | 0.6612 | 0.3364 | 0.6612 | 0.8131 |
| 0.0477 | 8.3915 | 1586 | 0.6472 | 0.3168 | 0.6472 | 0.8045 |
| 0.0477 | 8.4021 | 1588 | 0.6361 | 0.3168 | 0.6361 | 0.7975 |
| 0.0477 | 8.4127 | 1590 | 0.6289 | 0.3168 | 0.6289 | 0.7930 |
| 0.0477 | 8.4233 | 1592 | 0.6195 | 0.3168 | 0.6195 | 0.7871 |
| 0.0477 | 8.4339 | 1594 | 0.6158 | 0.3639 | 0.6158 | 0.7848 |
| 0.0477 | 8.4444 | 1596 | 0.6137 | 0.3639 | 0.6137 | 0.7834 |
| 0.0477 | 8.4550 | 1598 | 0.6162 | 0.3168 | 0.6162 | 0.7850 |
| 0.0477 | 8.4656 | 1600 | 0.6243 | 0.3168 | 0.6243 | 0.7901 |
| 0.0477 | 8.4762 | 1602 | 0.6333 | 0.3168 | 0.6333 | 0.7958 |
| 0.0477 | 8.4868 | 1604 | 0.6454 | 0.3008 | 0.6454 | 0.8034 |
| 0.0477 | 8.4974 | 1606 | 0.6574 | 0.3008 | 0.6574 | 0.8108 |
| 0.0477 | 8.5079 | 1608 | 0.6654 | 0.3364 | 0.6654 | 0.8157 |
| 0.0477 | 8.5185 | 1610 | 0.6672 | 0.3364 | 0.6672 | 0.8168 |
| 0.0477 | 8.5291 | 1612 | 0.6617 | 0.3364 | 0.6617 | 0.8135 |
| 0.0477 | 8.5397 | 1614 | 0.6507 | 0.3008 | 0.6507 | 0.8066 |
| 0.0477 | 8.5503 | 1616 | 0.6441 | 0.3008 | 0.6441 | 0.8026 |
| 0.0477 | 8.5608 | 1618 | 0.6402 | 0.3008 | 0.6402 | 0.8001 |
| 0.0477 | 8.5714 | 1620 | 0.6382 | 0.3008 | 0.6382 | 0.7989 |
| 0.0477 | 8.5820 | 1622 | 0.6357 | 0.3008 | 0.6357 | 0.7973 |
| 0.0477 | 8.5926 | 1624 | 0.6366 | 0.3008 | 0.6366 | 0.7979 |
| 0.0477 | 8.6032 | 1626 | 0.6369 | 0.3008 | 0.6369 | 0.7981 |
| 0.0477 | 8.6138 | 1628 | 0.6387 | 0.3008 | 0.6387 | 0.7992 |
| 0.0477 | 8.6243 | 1630 | 0.6456 | 0.3008 | 0.6456 | 0.8035 |
| 0.0477 | 8.6349 | 1632 | 0.6550 | 0.3008 | 0.6550 | 0.8093 |
| 0.0477 | 8.6455 | 1634 | 0.6553 | 0.3364 | 0.6553 | 0.8095 |
| 0.0477 | 8.6561 | 1636 | 0.6528 | 0.3008 | 0.6528 | 0.8080 |
| 0.0477 | 8.6667 | 1638 | 0.6553 | 0.3364 | 0.6553 | 0.8095 |
| 0.0477 | 8.6772 | 1640 | 0.6562 | 0.3364 | 0.6562 | 0.8101 |
| 0.0477 | 8.6878 | 1642 | 0.6562 | 0.3364 | 0.6562 | 0.8101 |
| 0.0477 | 8.6984 | 1644 | 0.6585 | 0.3364 | 0.6585 | 0.8115 |
| 0.0477 | 8.7090 | 1646 | 0.6582 | 0.3364 | 0.6582 | 0.8113 |
| 0.0477 | 8.7196 | 1648 | 0.6586 | 0.3364 | 0.6586 | 0.8116 |
| 0.0477 | 8.7302 | 1650 | 0.6606 | 0.3364 | 0.6606 | 0.8128 |
| 0.0477 | 8.7407 | 1652 | 0.6624 | 0.3364 | 0.6624 | 0.8139 |
| 0.0477 | 8.7513 | 1654 | 0.6640 | 0.3364 | 0.6640 | 0.8149 |
| 0.0477 | 8.7619 | 1656 | 0.6639 | 0.3364 | 0.6639 | 0.8148 |
| 0.0477 | 8.7725 | 1658 | 0.6592 | 0.3364 | 0.6592 | 0.8119 |
| 0.0477 | 8.7831 | 1660 | 0.6600 | 0.3364 | 0.6600 | 0.8124 |
| 0.0477 | 8.7937 | 1662 | 0.6598 | 0.3364 | 0.6598 | 0.8123 |
| 0.0477 | 8.8042 | 1664 | 0.6605 | 0.3364 | 0.6605 | 0.8127 |
| 0.0477 | 8.8148 | 1666 | 0.6591 | 0.3364 | 0.6591 | 0.8118 |
| 0.0477 | 8.8254 | 1668 | 0.6599 | 0.3364 | 0.6599 | 0.8123 |
| 0.0477 | 8.8360 | 1670 | 0.6623 | 0.3364 | 0.6623 | 0.8138 |
| 0.0477 | 8.8466 | 1672 | 0.6623 | 0.3364 | 0.6623 | 0.8138 |
| 0.0477 | 8.8571 | 1674 | 0.6621 | 0.3364 | 0.6621 | 0.8137 |
| 0.0477 | 8.8677 | 1676 | 0.6607 | 0.3364 | 0.6607 | 0.8128 |
| 0.0477 | 8.8783 | 1678 | 0.6594 | 0.3364 | 0.6594 | 0.8120 |
| 0.0477 | 8.8889 | 1680 | 0.6557 | 0.3364 | 0.6557 | 0.8097 |
| 0.0477 | 8.8995 | 1682 | 0.6525 | 0.3520 | 0.6525 | 0.8078 |
| 0.0477 | 8.9101 | 1684 | 0.6468 | 0.3520 | 0.6468 | 0.8042 |
| 0.0477 | 8.9206 | 1686 | 0.6459 | 0.3520 | 0.6459 | 0.8037 |
| 0.0477 | 8.9312 | 1688 | 0.6506 | 0.3520 | 0.6506 | 0.8066 |
| 0.0477 | 8.9418 | 1690 | 0.6559 | 0.3851 | 0.6559 | 0.8099 |
| 0.0477 | 8.9524 | 1692 | 0.6612 | 0.3700 | 0.6612 | 0.8132 |
| 0.0477 | 8.9630 | 1694 | 0.6640 | 0.3700 | 0.6640 | 0.8149 |
| 0.0477 | 8.9735 | 1696 | 0.6662 | 0.3700 | 0.6662 | 0.8162 |
| 0.0477 | 8.9841 | 1698 | 0.6691 | 0.3700 | 0.6691 | 0.8180 |
| 0.0477 | 8.9947 | 1700 | 0.6754 | 0.3700 | 0.6754 | 0.8218 |
| 0.0477 | 9.0053 | 1702 | 0.6802 | 0.3700 | 0.6802 | 0.8248 |
| 0.0477 | 9.0159 | 1704 | 0.6845 | 0.3700 | 0.6845 | 0.8274 |
| 0.0477 | 9.0265 | 1706 | 0.6889 | 0.3700 | 0.6889 | 0.8300 |
| 0.0477 | 9.0370 | 1708 | 0.6927 | 0.3700 | 0.6927 | 0.8323 |
| 0.0477 | 9.0476 | 1710 | 0.6909 | 0.3700 | 0.6909 | 0.8312 |
| 0.0477 | 9.0582 | 1712 | 0.6840 | 0.3700 | 0.6840 | 0.8271 |
| 0.0477 | 9.0688 | 1714 | 0.6775 | 0.3700 | 0.6775 | 0.8231 |
| 0.0477 | 9.0794 | 1716 | 0.6707 | 0.3700 | 0.6707 | 0.8189 |
| 0.0477 | 9.0899 | 1718 | 0.6630 | 0.3700 | 0.6630 | 0.8142 |
| 0.0477 | 9.1005 | 1720 | 0.6566 | 0.3700 | 0.6566 | 0.8103 |
| 0.0477 | 9.1111 | 1722 | 0.6521 | 0.3364 | 0.6521 | 0.8075 |
| 0.0477 | 9.1217 | 1724 | 0.6496 | 0.3364 | 0.6496 | 0.8060 |
| 0.0477 | 9.1323 | 1726 | 0.6507 | 0.4002 | 0.6507 | 0.8067 |
| 0.0477 | 9.1429 | 1728 | 0.6513 | 0.4002 | 0.6513 | 0.8071 |
| 0.0477 | 9.1534 | 1730 | 0.6517 | 0.4002 | 0.6517 | 0.8073 |
| 0.0477 | 9.1640 | 1732 | 0.6524 | 0.4002 | 0.6524 | 0.8077 |
| 0.0477 | 9.1746 | 1734 | 0.6512 | 0.4002 | 0.6512 | 0.8070 |
| 0.0477 | 9.1852 | 1736 | 0.6471 | 0.4002 | 0.6471 | 0.8044 |
| 0.0477 | 9.1958 | 1738 | 0.6426 | 0.4153 | 0.6426 | 0.8016 |
| 0.0477 | 9.2063 | 1740 | 0.6347 | 0.4153 | 0.6347 | 0.7967 |
| 0.0477 | 9.2169 | 1742 | 0.6304 | 0.4153 | 0.6304 | 0.7940 |
| 0.0477 | 9.2275 | 1744 | 0.6263 | 0.4153 | 0.6263 | 0.7914 |
| 0.0477 | 9.2381 | 1746 | 0.6266 | 0.4153 | 0.6266 | 0.7916 |
| 0.0477 | 9.2487 | 1748 | 0.6258 | 0.4153 | 0.6258 | 0.7911 |
| 0.0477 | 9.2593 | 1750 | 0.6256 | 0.4153 | 0.6256 | 0.7909 |
| 0.0477 | 9.2698 | 1752 | 0.6260 | 0.4153 | 0.6260 | 0.7912 |
| 0.0477 | 9.2804 | 1754 | 0.6267 | 0.4153 | 0.6267 | 0.7917 |
| 0.0477 | 9.2910 | 1756 | 0.6262 | 0.4153 | 0.6262 | 0.7913 |
| 0.0477 | 9.3016 | 1758 | 0.6259 | 0.4153 | 0.6259 | 0.7911 |
| 0.0477 | 9.3122 | 1760 | 0.6269 | 0.4002 | 0.6269 | 0.7918 |
| 0.0477 | 9.3228 | 1762 | 0.6273 | 0.3364 | 0.6273 | 0.7921 |
| 0.0477 | 9.3333 | 1764 | 0.6255 | 0.4002 | 0.6255 | 0.7909 |
| 0.0477 | 9.3439 | 1766 | 0.6240 | 0.4002 | 0.6240 | 0.7899 |
| 0.0477 | 9.3545 | 1768 | 0.6238 | 0.4002 | 0.6238 | 0.7898 |
| 0.0477 | 9.3651 | 1770 | 0.6239 | 0.4002 | 0.6239 | 0.7899 |
| 0.0477 | 9.3757 | 1772 | 0.6271 | 0.4002 | 0.6271 | 0.7919 |
| 0.0477 | 9.3862 | 1774 | 0.6321 | 0.3364 | 0.6321 | 0.7950 |
| 0.0477 | 9.3968 | 1776 | 0.6352 | 0.3364 | 0.6352 | 0.7970 |
| 0.0477 | 9.4074 | 1778 | 0.6356 | 0.4002 | 0.6356 | 0.7972 |
| 0.0477 | 9.4180 | 1780 | 0.6355 | 0.4002 | 0.6355 | 0.7972 |
| 0.0477 | 9.4286 | 1782 | 0.6363 | 0.4002 | 0.6363 | 0.7977 |
| 0.0477 | 9.4392 | 1784 | 0.6361 | 0.4002 | 0.6361 | 0.7976 |
| 0.0477 | 9.4497 | 1786 | 0.6355 | 0.4002 | 0.6355 | 0.7972 |
| 0.0477 | 9.4603 | 1788 | 0.6349 | 0.4002 | 0.6349 | 0.7968 |
| 0.0477 | 9.4709 | 1790 | 0.6359 | 0.4002 | 0.6359 | 0.7974 |
| 0.0477 | 9.4815 | 1792 | 0.6372 | 0.4002 | 0.6372 | 0.7982 |
| 0.0477 | 9.4921 | 1794 | 0.6386 | 0.4002 | 0.6386 | 0.7991 |
| 0.0477 | 9.5026 | 1796 | 0.6413 | 0.4002 | 0.6413 | 0.8008 |
| 0.0477 | 9.5132 | 1798 | 0.6448 | 0.3364 | 0.6448 | 0.8030 |
| 0.0477 | 9.5238 | 1800 | 0.6479 | 0.3364 | 0.6479 | 0.8049 |
| 0.0477 | 9.5344 | 1802 | 0.6511 | 0.3364 | 0.6511 | 0.8069 |
| 0.0477 | 9.5450 | 1804 | 0.6520 | 0.3364 | 0.6520 | 0.8075 |
| 0.0477 | 9.5556 | 1806 | 0.6535 | 0.3364 | 0.6535 | 0.8084 |
| 0.0477 | 9.5661 | 1808 | 0.6548 | 0.3364 | 0.6548 | 0.8092 |
| 0.0477 | 9.5767 | 1810 | 0.6570 | 0.3364 | 0.6570 | 0.8105 |
| 0.0477 | 9.5873 | 1812 | 0.6584 | 0.3364 | 0.6584 | 0.8114 |
| 0.0477 | 9.5979 | 1814 | 0.6577 | 0.3364 | 0.6577 | 0.8110 |
| 0.0477 | 9.6085 | 1816 | 0.6557 | 0.3364 | 0.6557 | 0.8098 |
| 0.0477 | 9.6190 | 1818 | 0.6535 | 0.3364 | 0.6535 | 0.8084 |
| 0.0477 | 9.6296 | 1820 | 0.6520 | 0.3364 | 0.6520 | 0.8074 |
| 0.0477 | 9.6402 | 1822 | 0.6495 | 0.3364 | 0.6495 | 0.8059 |
| 0.0477 | 9.6508 | 1824 | 0.6482 | 0.3364 | 0.6482 | 0.8051 |
| 0.0477 | 9.6614 | 1826 | 0.6478 | 0.3364 | 0.6478 | 0.8049 |
| 0.0477 | 9.6720 | 1828 | 0.6481 | 0.3364 | 0.6481 | 0.8051 |
| 0.0477 | 9.6825 | 1830 | 0.6479 | 0.3364 | 0.6479 | 0.8049 |
| 0.0477 | 9.6931 | 1832 | 0.6485 | 0.3364 | 0.6485 | 0.8053 |
| 0.0477 | 9.7037 | 1834 | 0.6496 | 0.3364 | 0.6496 | 0.8060 |
| 0.0477 | 9.7143 | 1836 | 0.6506 | 0.3364 | 0.6506 | 0.8066 |
| 0.0477 | 9.7249 | 1838 | 0.6510 | 0.4002 | 0.6510 | 0.8068 |
| 0.0477 | 9.7354 | 1840 | 0.6519 | 0.4002 | 0.6519 | 0.8074 |
| 0.0477 | 9.7460 | 1842 | 0.6520 | 0.4002 | 0.6520 | 0.8075 |
| 0.0477 | 9.7566 | 1844 | 0.6512 | 0.4002 | 0.6512 | 0.8069 |
| 0.0477 | 9.7672 | 1846 | 0.6497 | 0.4002 | 0.6497 | 0.8060 |
| 0.0477 | 9.7778 | 1848 | 0.6479 | 0.4002 | 0.6479 | 0.8049 |
| 0.0477 | 9.7884 | 1850 | 0.6468 | 0.4002 | 0.6468 | 0.8043 |
| 0.0477 | 9.7989 | 1852 | 0.6458 | 0.4002 | 0.6458 | 0.8036 |
| 0.0477 | 9.8095 | 1854 | 0.6455 | 0.4002 | 0.6455 | 0.8034 |
| 0.0477 | 9.8201 | 1856 | 0.6458 | 0.4002 | 0.6458 | 0.8036 |
| 0.0477 | 9.8307 | 1858 | 0.6459 | 0.4002 | 0.6459 | 0.8037 |
| 0.0477 | 9.8413 | 1860 | 0.6459 | 0.4002 | 0.6459 | 0.8037 |
| 0.0477 | 9.8519 | 1862 | 0.6455 | 0.4002 | 0.6455 | 0.8034 |
| 0.0477 | 9.8624 | 1864 | 0.6454 | 0.4002 | 0.6454 | 0.8033 |
| 0.0477 | 9.8730 | 1866 | 0.6454 | 0.4002 | 0.6454 | 0.8033 |
| 0.0477 | 9.8836 | 1868 | 0.6448 | 0.4002 | 0.6448 | 0.8030 |
| 0.0477 | 9.8942 | 1870 | 0.6441 | 0.4002 | 0.6441 | 0.8026 |
| 0.0477 | 9.9048 | 1872 | 0.6436 | 0.4002 | 0.6436 | 0.8022 |
| 0.0477 | 9.9153 | 1874 | 0.6435 | 0.4002 | 0.6435 | 0.8022 |
| 0.0477 | 9.9259 | 1876 | 0.6435 | 0.4002 | 0.6435 | 0.8022 |
| 0.0477 | 9.9365 | 1878 | 0.6438 | 0.4002 | 0.6438 | 0.8024 |
| 0.0477 | 9.9471 | 1880 | 0.6439 | 0.4002 | 0.6439 | 0.8024 |
| 0.0477 | 9.9577 | 1882 | 0.6439 | 0.4002 | 0.6439 | 0.8024 |
| 0.0477 | 9.9683 | 1884 | 0.6440 | 0.4002 | 0.6440 | 0.8025 |
| 0.0477 | 9.9788 | 1886 | 0.6441 | 0.4002 | 0.6441 | 0.8025 |
| 0.0477 | 9.9894 | 1888 | 0.6441 | 0.4002 | 0.6441 | 0.8025 |
| 0.0477 | 10.0 | 1890 | 0.6441 | 0.4002 | 0.6441 | 0.8025 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mradermacher/Pentesting-GPT-v1.0-GGUF | mradermacher | 2024-11-26T17:38:36Z | 255 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:frostsg/Pentesting-GPT-v1.0",
"base_model:quantized:frostsg/Pentesting-GPT-v1.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-26T16:49:12Z | ---
base_model: frostsg/Pentesting-GPT-v1.0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/frostsg/Pentesting-GPT-v1.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
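As a rough illustration (not part of the original card), a downloaded quant can be run with the `llama-cpp-python` bindings; the file name below is the Q4_K_M entry from the table in the next section, everything else is an assumption:

```python
# Sketch only: assumes `pip install llama-cpp-python` and that the Q4_K_M file
# from the quant table below has already been downloaded to the working directory.
from llama_cpp import Llama

llm = Llama(model_path="Pentesting-GPT-v1.0.Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What does an open TCP port indicate?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```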
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Pentesting-GPT-v1.0-GGUF/resolve/main/Pentesting-GPT-v1.0.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Pentesting-GPT-v1.0-GGUF/resolve/main/Pentesting-GPT-v1.0.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Pentesting-GPT-v1.0-GGUF/resolve/main/Pentesting-GPT-v1.0.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Pentesting-GPT-v1.0-GGUF/resolve/main/Pentesting-GPT-v1.0.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Pentesting-GPT-v1.0-GGUF/resolve/main/Pentesting-GPT-v1.0.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Pentesting-GPT-v1.0-GGUF/resolve/main/Pentesting-GPT-v1.0.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Pentesting-GPT-v1.0-GGUF/resolve/main/Pentesting-GPT-v1.0.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pentesting-GPT-v1.0-GGUF/resolve/main/Pentesting-GPT-v1.0.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pentesting-GPT-v1.0-GGUF/resolve/main/Pentesting-GPT-v1.0.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Pentesting-GPT-v1.0-GGUF/resolve/main/Pentesting-GPT-v1.0.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Pentesting-GPT-v1.0-GGUF/resolve/main/Pentesting-GPT-v1.0.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Pentesting-GPT-v1.0-GGUF/resolve/main/Pentesting-GPT-v1.0.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Pentesting-GPT-v1.0-GGUF/resolve/main/Pentesting-GPT-v1.0.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Jeeeeeeeeeeeeeeez/my-tinyreco-model-new-data-bright8 | Jeeeeeeeeeeeeeeez | 2024-11-26T17:32:49Z | 135 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T17:30:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nabushika/TheDrummer_Behemoth-123B-v2.2-2.65bpw-h6-exl2 | Nabushika | 2024-11-26T17:29:06Z | 10 | 0 | null | [
"mistral",
"base_model:TheDrummer/Behemoth-123B-v2.2",
"base_model:quantized:TheDrummer/Behemoth-123B-v2.2",
"license:other",
"exl2",
"region:us"
] | null | 2024-11-26T16:47:59Z | ---
license: other
base_model:
- TheDrummer/Behemoth-123B-v2.2
base_model_relation: quantized
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2500 members strong 💪
### Now with more channels! A hub for creatives and makers alike!
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
*The finetune that made people buy another 3090...*
# Behemoth 123B v2.2 🦣
> Nothing in the void is foreign to us. The place we go is the place we belong.

## Links
- Original: https://huggingface.co/TheDrummer/Behemoth-123B-v2.2
- GGUF: https://huggingface.co/TheDrummer/Behemoth-123B-v2.2-GGUF
- iMatrix: https://huggingface.co/bartowski/Behemoth-123B-v2.2-GGUF (recommended for smaller quants)
## Description
Behemoth v2.x is a finetune of the new Largestral 2411 with system prompt support. Testers have noted that **everything** felt improved.
### Usage
Testers say this frankenformat maximizes the model's potential: **Metharme** with Mistral's new system tokens
- `[SYSTEM_PROMPT] <|system|>{{system_message}}[/SYSTEM_PROMPT]<|user|>{{user_message}}<|model|>{{assistant_message}}`
- `<|system|>[SYSTEM_PROMPT] {{system_message}}[/SYSTEM_PROMPT]<|user|>{{user_message}}<|model|>{{assistant_message}}`
*Take note that the opening system tag SHOULD ALWAYS be followed by a whitespace.*
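For illustration, here is a minimal sketch of how the first variant could be assembled in Python (the tag strings are copied from the format above; the function name and messages are placeholders):

```python
# Builds the first frankenformat variant shown above.
def build_behemoth_prompt(system_message: str, user_message: str) -> str:
    # Note the whitespace after the opening [SYSTEM_PROMPT] tag.
    return (
        f"[SYSTEM_PROMPT] <|system|>{system_message}[/SYSTEM_PROMPT]"
        f"<|user|>{user_message}<|model|>"
    )

print(build_behemoth_prompt("You are a creative writing partner.", "Continue the story."))
```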
Complete SillyTavern Settings in BeaverAI Club: https://discord.com/channels/1238219753324281886/1309968730301792370/1309968730301792370
Mirror: https://rentry.org/cd32disa
### Versions
- [v2.0](https://huggingface.co/TheDrummer/Behemoth-123B-v2) is equivalent to Behemoth v1.0 (Classic)
- [v2.1](https://huggingface.co/TheDrummer/Behemoth-123B-v2.1) is equivalent to Behemoth v1.1 (Creative Boost)
- [v2.2](https://huggingface.co/TheDrummer/Behemoth-123B-v2.2) is an improvement of Behemoth v2.1 (Creative++)
## Special Thanks
Thank you to each and everyone who donated/subscribed in [Ko-Fi](https://ko-fi.com/thedrummer) 🙇 I hope to never disappoint!
```
Toasty Pigeon
theguywhogamesalot
Grozi
F
Marinara
Ko-fi Supporter
Grozi
Phaelon
ONTHEREDTEAM
EvarinSharath'fe(USM-Valor)
Silva
Dakkidaze
AlexTheVP
Pseudo
Kistara
Dr. Fjut
Grozi 🥈
KinjiHakari777
dustywintr
Syd
HumbleConsumer
Syd
Ko-fi Supporter
Arkamist
joe 🥇
Toad
Lied
Konnect
Kistara
Grozi 🥉
SleepDeprived3
Luigi
Nestor
```
https://ko-fi.com/thedrummer/leaderboard
```
Finetuned by yours truly,
Drummer
```
Thank you Gargy for the GPUs!
 |
CarlosRiverMe/sd3-finetuned-aws | CarlosRiverMe | 2024-11-26T17:19:28Z | 5 | 0 | diffusers | [
"diffusers",
"sd3",
"sd3-diffusers",
"text-to-image",
"simpletuner",
"safe-for-work",
"lora",
"template:sd-lora",
"standard",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:adapter:stabilityai/stable-diffusion-3.5-large",
"license:other",
"region:us"
] | text-to-image | 2024-11-26T02:34:15Z | ---
license: other
base_model: "stabilityai/stable-diffusion-3.5-large"
tags:
- sd3
- sd3-diffusers
- text-to-image
- diffusers
- simpletuner
- safe-for-work
- lora
- template:sd-lora
- standard
inference: true
widget:
- text: 'unconditional (blank prompt)'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_0_0.png
- text: 'zukythedog chilling in the living room'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_1_0.png
---
# sd3-finetuned-aws
This is a standard PEFT LoRA derived from [stabilityai/stable-diffusion-3.5-large](https://huggingface.co/stabilityai/stable-diffusion-3.5-large).
The main validation prompt used during training was:
```
zukythedog chilling in the living room
```
## Validation settings
- CFG: `5.0`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `FlowMatchEulerDiscreteScheduler`
- Seed: `42`
- Resolution: `512x512`
- Skip-layer guidance:
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 4
- Training steps: 2600
- Learning rate: 5e-05
- Learning rate schedule: polynomial
- Warmup steps: 100
- Max grad norm: 0.01
- Effective batch size: 1
- Micro-batch size: 1
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow-matching (extra parameters=['shift=3'])
- Optimizer: adamw_bf16
- Trainable parameter precision: Pure BF16
- Caption dropout probability: 0.0%
- LoRA Rank: 64
- LoRA Alpha: None
- LoRA Dropout: 0.1
- LoRA initialisation style: default
## Datasets
### zukythedog-dataset-512
- Repeats: 5
- Total number of images: 23
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
### zukythedog-dataset-1024
- Repeats: 5
- Total number of images: 23
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
### zukythedog-dataset-512-crop
- Repeats: 5
- Total number of images: 23
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No
### zukythedog-dataset-1024-crop
- Repeats: 5
- Total number of images: 23
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No
## Inference
```python
import torch
from diffusers import DiffusionPipeline
model_id = 'stabilityai/stable-diffusion-3.5-large'
adapter_id = 'CarlosRiverMe/sd3-finetuned-aws'
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16) # loading directly in bf16
pipeline.load_lora_weights(adapter_id)
prompt = "zukythedog chilling in the living room"
negative_prompt = 'blurry, cropped, ugly'
## Optional: quantise the model to save on vram.
## Note: The model was quantised during training, and so it is recommended to do the same during inference time.
from optimum.quanto import quantize, freeze, qint8
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu') # the pipeline is already in its target precision level
image = pipeline(
prompt=prompt,
negative_prompt=negative_prompt,
num_inference_steps=20,
generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(42),
width=512,
height=512,
guidance_scale=5.0,
).images[0]
image.save("output.png", format="PNG")
```
|
klaudia-firlag/distilbert-base-uncased-finetuned-sentiment5 | klaudia-firlag | 2024-11-26T17:09:45Z | 202 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T17:09:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF | mradermacher | 2024-11-26T17:07:17Z | 2,465 | 0 | transformers | [
"transformers",
"gguf",
"de",
"bg",
"cs",
"da",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sl",
"sv",
"sk",
"base_model:openGPT-X/Teuken-7B-instruct-research-v0.4",
"base_model:quantized:openGPT-X/Teuken-7B-instruct-research-v0.4",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-26T15:18:35Z | ---
base_model: openGPT-X/Teuken-7B-instruct-research-v0.4
language:
- de
- bg
- cs
- da
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sl
- sv
- sk
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
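As a rough illustration (not part of the original card), a single quant file can be fetched with `huggingface_hub`; the file name below is the i1-Q4_K_M entry from the table in the next section:

```python
# Sketch only: downloads one GGUF file from this repo for use with llama.cpp-compatible runtimes.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF",
    filename="Teuken-7B-instruct-research-v0.4.i1-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded quant
```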
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-IQ1_S.gguf) | i1-IQ1_S | 2.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-IQ1_M.gguf) | i1-IQ1_M | 2.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-IQ2_S.gguf) | i1-IQ2_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-IQ2_M.gguf) | i1-IQ2_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-Q2_K.gguf) | i1-Q2_K | 3.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-IQ3_S.gguf) | i1-IQ3_S | 3.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.6 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.6 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-Q4_0.gguf) | i1-Q4_0 | 4.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-research-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-research-v0.4.i1-Q6_K.gguf) | i1-Q6_K | 6.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
HuggingFaceTB/SmolVLM-Synthetic | HuggingFaceTB | 2024-11-26T17:04:20Z | 364 | 11 | transformers | [
"transformers",
"safetensors",
"idefics3",
"image-text-to-text",
"conversational",
"en",
"dataset:HuggingFaceM4/the_cauldron",
"dataset:HuggingFaceM4/Docmatix",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-11-22T17:20:00Z | ---
library_name: transformers
license: apache-2.0
datasets:
- HuggingFaceM4/the_cauldron
- HuggingFaceM4/Docmatix
pipeline_tag: image-text-to-text
language:
- en
base_model:
- HuggingFaceTB/SmolLM2-1.7B-Instruct
- google/siglip-so400m-patch14-384
---
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/SmolVLM.png" width="800" height="auto" alt="Image description">
# SmolVLM
SmolVLM is a compact open multimodal model that accepts arbitrary sequences of image and text inputs to produce text outputs.
Designed for efficiency, SmolVLM can answer questions about images, describe visual content, create stories grounded on multiple images,
or function as a pure language model without visual inputs. Its lightweight architecture makes it suitable for on-device applications
while maintaining strong performance on multimodal tasks.
## Model Summary
- **Developed by:** Hugging Face 🤗
- **Model type:** Multi-modal model (image+text)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Architecture:** Based on [Idefics3](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3) (see technical summary)
## Resources
- **Demo:** [SmolVLM Demo](https://huggingface.co/spaces/HuggingFaceTB/SmolVLM)
- **Blog:** [Blog post](https://huggingface.co/blog/smolvlm)
## Uses
SmolVLM can be used for inference on multimodal (image + text) tasks where the input comprises text queries along with one or more images.
Text and images can be interleaved arbitrarily, enabling tasks like image captioning, visual question answering, and storytelling based on
visual content. The model does not support image generation.
To fine-tune SmolVLM on a specific task, you can follow the fine-tuning tutorial.
<!-- todo: add link to fine-tuning tutorial -->
### Technical Summary
SmolVLM leverages the lightweight SmolLM2 language model to provide a compact yet powerful multimodal experience.
It introduces several changes compared to previous Idefics models:
- **Image compression:** We introduce more aggressive image compression than in Idefics3, enabling the model to infer faster and use less RAM.
- **Visual Token Encoding:** SmolVLM uses 81 visual tokens to encode image patches of size 384×384. Larger images are divided into patches, each encoded separately, enhancing efficiency without compromising performance.
More details about the training and architecture are available in our technical report.
### How to get started
You can use transformers to load, infer and fine-tune SmolVLM.
```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
# Load images
image1 = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg")
image2 = load_image("https://huggingface.co/spaces/merve/chameleon-7b/resolve/main/bee.jpg")
# Initialize processor and model
processor = AutoProcessor.from_pretrained("HuggingFaceTB/SmolVLM-Synthetic")
model = AutoModelForVision2Seq.from_pretrained(
"HuggingFaceTB/SmolVLM-Synthetic",
torch_dtype=torch.bfloat16,
_attn_implementation="flash_attention_2" if DEVICE == "cuda" else "eager",
).to(DEVICE)
# Create input messages
messages = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "image"},
{"type": "text", "text": "Can you describe the two images?"}
]
},
]
# Prepare inputs
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image1, image2], return_tensors="pt")
inputs = inputs.to(DEVICE)
# Generate outputs
generated_ids = model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(
generated_ids,
skip_special_tokens=True,
)
print(generated_texts[0])
"""
User:<image>Can you describe the two images?
Assistant: The two images are not described in the provided facts, so we cannot provide any information about them.
"""
```
### Model optimizations
**Precision**: For better performance, load and run the model in half-precision (`torch.float16` or `torch.bfloat16`) if your hardware supports it.
```python
from transformers import AutoModelForVision2Seq
import torch
model = AutoModelForVision2Seq.from_pretrained(
"HuggingFaceTB/SmolVLM-Synthetic",
torch_dtype=torch.bfloat16
).to("cuda")
```
You can also load SmolVLM with 4/8-bit quantization using bitsandbytes, torchao or Quanto. Refer to [this page](https://huggingface.co/docs/transformers/en/main_classes/quantization) for other options.
```python
from transformers import AutoModelForVision2Seq, BitsAndBytesConfig
import torch
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForVision2Seq.from_pretrained(
"HuggingFaceTB/SmolVLM-Synthetic",
quantization_config=quantization_config,
)
```
**Vision Encoder Efficiency**: Adjust the image resolution by setting `size={"longest_edge": N*384}` when initializing the processor, where N is your desired value. The default `N=4` works well, which results in input images of
size 1536×1536. For documents, `N=5` might be beneficial. Decreasing N can save GPU memory and is appropriate for lower-resolution images. This is also useful if you want to fine-tune on videos.
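For example, a minimal sketch of passing this setting to the processor (using `N=5` as suggested for documents):

```python
from transformers import AutoProcessor

# N = 5 -> longest edge of 1920 px; lower N saves GPU memory.
processor = AutoProcessor.from_pretrained(
    "HuggingFaceTB/SmolVLM-Synthetic",
    size={"longest_edge": 5 * 384},
)
```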
## Misuse and Out-of-scope Use
SmolVLM is not intended for high-stakes scenarios or critical decision-making processes that affect an individual's well-being or livelihood. The model may produce content that appears factual but may not be accurate. Misuse includes, but is not limited to:
- Prohibited Uses:
- Evaluating or scoring individuals (e.g., in employment, education, credit)
- Critical automated decision-making
- Generating unreliable factual content
- Malicious Activities:
- Spam generation
- Disinformation campaigns
- Harassment or abuse
- Unauthorized surveillance
### License
SmolVLM is built upon [the shape-optimized SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) as image encoder and [SmolLM2](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct) for text decoder part.
We release the SmolVLM checkpoints under the Apache 2.0 license.
## Training Details
### Training Data
The training data comes from [The Cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron) and [Docmatix](https://huggingface.co/datasets/HuggingFaceM4/Docmatix) datasets, with emphasis on document understanding (25%) and image captioning (18%), while maintaining balanced coverage across other crucial capabilities like visual reasoning, chart comprehension, and general instruction following.
<img src="https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct/resolve/main/mixture_the_cauldron.png" alt="Example Image" style="width:90%;" />
## Evaluation
| Model | MMMU (val) | MathVista (testmini) | MMStar (val) | DocVQA (test) | TextVQA (val) | Min GPU RAM required (GB) |
|-------------------|------------|----------------------|--------------|---------------|---------------|---------------------------|
| SmolVLM | 38.8 | 44.6 | 42.1 | 81.6 | 72.7 | 5.02 |
| Qwen-VL 2B | 41.1 | 47.8 | 47.5 | 90.1 | 79.7 | 13.70 |
| InternVL2 2B | 34.3 | 46.3 | 49.8 | 86.9 | 73.4 | 10.52 |
| PaliGemma 3B 448px| 34.9 | 28.7 | 48.3 | 32.2 | 56.0 | 6.72 |
| moondream2 | 32.4 | 24.3 | 40.3 | 70.5 | 65.2 | 3.87 |
| MiniCPM-V-2 | 38.2 | 39.8 | 39.1 | 71.9 | 74.1 | 7.88 |
| MM1.5 1B | 35.8 | 37.2 | 0.0 | 81.0 | 72.5 | NaN | |
sachi1/3d-icon-Flux-LoRA | sachi1 | 2024-11-26T17:01:42Z | 9 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-26T11:43:18Z | ---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
instance_prompt: 3d icon in the style of <s0><s1>
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux DreamBooth LoRA - sachi1/3d-icon-Flux-LoRA
<Gallery />
## Model description
These are sachi1/3d-icon-Flux-LoRA DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Download model
[Download the *.safetensors LoRA](sachi1/3d-icon-Flux-LoRA/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('sachi1/3d-icon-Flux-LoRA', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='sachi1/3d-icon-Flux-LoRA', filename='3d-icon-Flux-LoRA_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
image = pipeline('3d icon in the style of <s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k40_task2_organization_fold0 | MayBashendy | 2024-11-26T16:59:56Z | 164 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T16:41:58Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits_FineTuningAraBERT_AugV5_k40_task2_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits_FineTuningAraBERT_AugV5_k40_task2_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7286
- Qwk: 0.3124
- Mse: 0.7286
- Rmse: 0.8536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0159 | 2 | 3.6352 | -0.0066 | 3.6352 | 1.9066 |
| No log | 0.0317 | 4 | 2.1072 | -0.0300 | 2.1072 | 1.4516 |
| No log | 0.0476 | 6 | 1.1555 | 0.1667 | 1.1555 | 1.0749 |
| No log | 0.0635 | 8 | 1.0165 | 0.0883 | 1.0165 | 1.0082 |
| No log | 0.0794 | 10 | 1.2512 | 0.1121 | 1.2512 | 1.1186 |
| No log | 0.0952 | 12 | 1.7923 | -0.0283 | 1.7923 | 1.3388 |
| No log | 0.1111 | 14 | 1.7765 | -0.0301 | 1.7765 | 1.3329 |
| No log | 0.1270 | 16 | 1.6100 | -0.0137 | 1.6100 | 1.2689 |
| No log | 0.1429 | 18 | 1.1681 | 0.0597 | 1.1681 | 1.0808 |
| No log | 0.1587 | 20 | 0.9982 | 0.0597 | 0.9982 | 0.9991 |
| No log | 0.1746 | 22 | 0.9529 | 0.0299 | 0.9529 | 0.9762 |
| No log | 0.1905 | 24 | 0.9482 | 0.0299 | 0.9482 | 0.9737 |
| No log | 0.2063 | 26 | 1.0978 | 0.0455 | 1.0978 | 1.0477 |
| No log | 0.2222 | 28 | 1.2548 | 0.0455 | 1.2548 | 1.1202 |
| No log | 0.2381 | 30 | 1.4135 | 0.0015 | 1.4135 | 1.1889 |
| No log | 0.2540 | 32 | 1.3315 | 0.0015 | 1.3315 | 1.1539 |
| No log | 0.2698 | 34 | 1.1148 | 0.0015 | 1.1148 | 1.0558 |
| No log | 0.2857 | 36 | 1.0052 | 0.0597 | 1.0052 | 1.0026 |
| No log | 0.3016 | 38 | 0.9113 | 0.1181 | 0.9113 | 0.9546 |
| No log | 0.3175 | 40 | 0.7844 | 0.2067 | 0.7844 | 0.8856 |
| No log | 0.3333 | 42 | 0.8079 | 0.1296 | 0.8079 | 0.8988 |
| No log | 0.3492 | 44 | 0.9058 | 0.1189 | 0.9058 | 0.9517 |
| No log | 0.3651 | 46 | 1.2217 | 0.0597 | 1.2217 | 1.1053 |
| No log | 0.3810 | 48 | 1.5327 | 0.0171 | 1.5327 | 1.2380 |
| No log | 0.3968 | 50 | 1.6171 | 0.0527 | 1.6171 | 1.2717 |
| No log | 0.4127 | 52 | 1.5538 | 0.0289 | 1.5538 | 1.2465 |
| No log | 0.4286 | 54 | 1.2962 | 0.1324 | 1.2962 | 1.1385 |
| No log | 0.4444 | 56 | 1.1371 | 0.0840 | 1.1371 | 1.0664 |
| No log | 0.4603 | 58 | 1.0421 | 0.1140 | 1.0421 | 1.0208 |
| No log | 0.4762 | 60 | 0.9956 | 0.1467 | 0.9956 | 0.9978 |
| No log | 0.4921 | 62 | 1.1772 | 0.1274 | 1.1772 | 1.0850 |
| No log | 0.5079 | 64 | 1.5136 | 0.0901 | 1.5136 | 1.2303 |
| No log | 0.5238 | 66 | 1.5974 | 0.1343 | 1.5974 | 1.2639 |
| No log | 0.5397 | 68 | 1.3733 | 0.0675 | 1.3733 | 1.1719 |
| No log | 0.5556 | 70 | 1.2176 | 0.1555 | 1.2176 | 1.1035 |
| No log | 0.5714 | 72 | 0.9895 | 0.1181 | 0.9895 | 0.9947 |
| No log | 0.5873 | 74 | 0.8524 | 0.1181 | 0.8524 | 0.9232 |
| No log | 0.6032 | 76 | 0.8128 | 0.1324 | 0.8128 | 0.9016 |
| No log | 0.6190 | 78 | 0.8423 | 0.0882 | 0.8423 | 0.9178 |
| No log | 0.6349 | 80 | 0.8314 | 0.1324 | 0.8314 | 0.9118 |
| No log | 0.6508 | 82 | 0.7935 | 0.2353 | 0.7935 | 0.8908 |
| No log | 0.6667 | 84 | 0.8547 | 0.2353 | 0.8547 | 0.9245 |
| No log | 0.6825 | 86 | 1.1488 | 0.2078 | 1.1488 | 1.0718 |
| No log | 0.6984 | 88 | 1.5992 | 0.0420 | 1.5992 | 1.2646 |
| No log | 0.7143 | 90 | 1.5432 | 0.0251 | 1.5432 | 1.2422 |
| No log | 0.7302 | 92 | 1.3048 | 0.1758 | 1.3048 | 1.1423 |
| No log | 0.7460 | 94 | 1.0220 | 0.1982 | 1.0220 | 1.0109 |
| No log | 0.7619 | 96 | 0.8579 | 0.2283 | 0.8579 | 0.9262 |
| No log | 0.7778 | 98 | 0.8712 | 0.2173 | 0.8712 | 0.9334 |
| No log | 0.7937 | 100 | 0.9867 | 0.2576 | 0.9867 | 0.9933 |
| No log | 0.8095 | 102 | 1.0381 | 0.2436 | 1.0381 | 1.0189 |
| No log | 0.8254 | 104 | 1.1300 | 0.2005 | 1.1300 | 1.0630 |
| No log | 0.8413 | 106 | 0.9891 | 0.2576 | 0.9891 | 0.9945 |
| No log | 0.8571 | 108 | 0.9050 | 0.2639 | 0.9050 | 0.9513 |
| No log | 0.8730 | 110 | 0.7171 | 0.3562 | 0.7171 | 0.8468 |
| No log | 0.8889 | 112 | 0.6440 | 0.3096 | 0.6440 | 0.8025 |
| No log | 0.9048 | 114 | 0.6347 | 0.3085 | 0.6347 | 0.7967 |
| No log | 0.9206 | 116 | 0.6798 | 0.3987 | 0.6798 | 0.8245 |
| No log | 0.9365 | 118 | 0.7767 | 0.3148 | 0.7767 | 0.8813 |
| No log | 0.9524 | 120 | 0.9303 | 0.2576 | 0.9303 | 0.9645 |
| No log | 0.9683 | 122 | 0.9052 | 0.2576 | 0.9052 | 0.9514 |
| No log | 0.9841 | 124 | 0.7744 | 0.2846 | 0.7744 | 0.8800 |
| No log | 1.0 | 126 | 0.7592 | 0.2776 | 0.7592 | 0.8713 |
| No log | 1.0159 | 128 | 0.7568 | 0.2888 | 0.7568 | 0.8699 |
| No log | 1.0317 | 130 | 0.7804 | 0.2294 | 0.7804 | 0.8834 |
| No log | 1.0476 | 132 | 0.7735 | 0.2608 | 0.7735 | 0.8795 |
| No log | 1.0635 | 134 | 0.7576 | 0.3523 | 0.7576 | 0.8704 |
| No log | 1.0794 | 136 | 0.9876 | 0.2587 | 0.9876 | 0.9938 |
| No log | 1.0952 | 138 | 1.3631 | 0.1275 | 1.3631 | 1.1675 |
| No log | 1.1111 | 140 | 1.4313 | 0.1676 | 1.4313 | 1.1964 |
| No log | 1.1270 | 142 | 1.2734 | 0.1648 | 1.2734 | 1.1284 |
| No log | 1.1429 | 144 | 1.0908 | 0.2334 | 1.0908 | 1.0444 |
| No log | 1.1587 | 146 | 0.9161 | 0.1324 | 0.9161 | 0.9571 |
| No log | 1.1746 | 148 | 0.7530 | 0.2342 | 0.7530 | 0.8678 |
| No log | 1.1905 | 150 | 0.6807 | 0.3523 | 0.6807 | 0.8250 |
| No log | 1.2063 | 152 | 0.6293 | 0.3357 | 0.6293 | 0.7933 |
| No log | 1.2222 | 154 | 0.6051 | 0.3443 | 0.6051 | 0.7779 |
| No log | 1.2381 | 156 | 0.6120 | 0.2810 | 0.6120 | 0.7823 |
| No log | 1.2540 | 158 | 0.6307 | 0.2969 | 0.6307 | 0.7942 |
| No log | 1.2698 | 160 | 0.6615 | 0.4272 | 0.6615 | 0.8133 |
| No log | 1.2857 | 162 | 0.6601 | 0.4124 | 0.6601 | 0.8125 |
| No log | 1.3016 | 164 | 0.6566 | 0.2186 | 0.6566 | 0.8103 |
| No log | 1.3175 | 166 | 0.7141 | 0.1494 | 0.7141 | 0.8450 |
| No log | 1.3333 | 168 | 0.7514 | 0.2386 | 0.7514 | 0.8668 |
| No log | 1.3492 | 170 | 0.7347 | 0.2547 | 0.7347 | 0.8572 |
| No log | 1.3651 | 172 | 0.6816 | 0.1668 | 0.6816 | 0.8256 |
| No log | 1.3810 | 174 | 0.6636 | 0.3484 | 0.6636 | 0.8146 |
| No log | 1.3968 | 176 | 0.6717 | 0.3799 | 0.6717 | 0.8196 |
| No log | 1.4127 | 178 | 0.6610 | 0.1867 | 0.6610 | 0.8130 |
| No log | 1.4286 | 180 | 0.6760 | 0.1977 | 0.6760 | 0.8222 |
| No log | 1.4444 | 182 | 0.6926 | 0.1668 | 0.6926 | 0.8322 |
| No log | 1.4603 | 184 | 0.7063 | 0.2229 | 0.7063 | 0.8404 |
| No log | 1.4762 | 186 | 0.7145 | 0.2396 | 0.7145 | 0.8453 |
| No log | 1.4921 | 188 | 0.7491 | 0.2521 | 0.7491 | 0.8655 |
| No log | 1.5079 | 190 | 0.7431 | 0.2584 | 0.7431 | 0.8621 |
| No log | 1.5238 | 192 | 0.7375 | 0.2726 | 0.7375 | 0.8588 |
| No log | 1.5397 | 194 | 0.7150 | 0.3313 | 0.7150 | 0.8456 |
| No log | 1.5556 | 196 | 0.7069 | 0.2381 | 0.7069 | 0.8408 |
| No log | 1.5714 | 198 | 0.6884 | 0.2039 | 0.6884 | 0.8297 |
| No log | 1.5873 | 200 | 0.6841 | 0.1694 | 0.6841 | 0.8271 |
| No log | 1.6032 | 202 | 0.6857 | 0.1990 | 0.6857 | 0.8281 |
| No log | 1.6190 | 204 | 0.6725 | 0.1400 | 0.6725 | 0.8201 |
| No log | 1.6349 | 206 | 0.7109 | 0.2966 | 0.7109 | 0.8431 |
| No log | 1.6508 | 208 | 0.6986 | 0.1910 | 0.6986 | 0.8358 |
| No log | 1.6667 | 210 | 0.6869 | 0.1861 | 0.6869 | 0.8288 |
| No log | 1.6825 | 212 | 0.6867 | 0.2145 | 0.6867 | 0.8287 |
| No log | 1.6984 | 214 | 0.6868 | 0.2419 | 0.6868 | 0.8287 |
| No log | 1.7143 | 216 | 0.7176 | 0.2934 | 0.7176 | 0.8471 |
| No log | 1.7302 | 218 | 0.8409 | 0.3250 | 0.8409 | 0.9170 |
| No log | 1.7460 | 220 | 0.7894 | 0.3460 | 0.7894 | 0.8885 |
| No log | 1.7619 | 222 | 0.6715 | 0.3950 | 0.6715 | 0.8195 |
| No log | 1.7778 | 224 | 0.7364 | 0.2205 | 0.7364 | 0.8581 |
| No log | 1.7937 | 226 | 0.7584 | 0.2193 | 0.7584 | 0.8709 |
| No log | 1.8095 | 228 | 0.6783 | 0.1930 | 0.6783 | 0.8236 |
| No log | 1.8254 | 230 | 0.7167 | 0.3589 | 0.7167 | 0.8466 |
| No log | 1.8413 | 232 | 0.7939 | 0.3691 | 0.7939 | 0.8910 |
| No log | 1.8571 | 234 | 0.7423 | 0.3623 | 0.7423 | 0.8615 |
| No log | 1.8730 | 236 | 0.6717 | 0.3066 | 0.6717 | 0.8196 |
| No log | 1.8889 | 238 | 0.7674 | 0.4225 | 0.7674 | 0.8760 |
| No log | 1.9048 | 240 | 0.8098 | 0.3495 | 0.8098 | 0.8999 |
| No log | 1.9206 | 242 | 0.7286 | 0.3446 | 0.7286 | 0.8536 |
| No log | 1.9365 | 244 | 0.6740 | 0.2562 | 0.6740 | 0.8210 |
| No log | 1.9524 | 246 | 0.7206 | 0.2771 | 0.7206 | 0.8489 |
| No log | 1.9683 | 248 | 0.7312 | 0.2924 | 0.7312 | 0.8551 |
| No log | 1.9841 | 250 | 0.6941 | 0.2595 | 0.6941 | 0.8331 |
| No log | 2.0 | 252 | 0.6865 | 0.2286 | 0.6865 | 0.8286 |
| No log | 2.0159 | 254 | 0.7172 | 0.2547 | 0.7172 | 0.8469 |
| No log | 2.0317 | 256 | 0.7543 | 0.2374 | 0.7543 | 0.8685 |
| No log | 2.0476 | 258 | 0.7801 | 0.2616 | 0.7801 | 0.8832 |
| No log | 2.0635 | 260 | 0.8139 | 0.3173 | 0.8139 | 0.9021 |
| No log | 2.0794 | 262 | 0.7447 | 0.2712 | 0.7447 | 0.8630 |
| No log | 2.0952 | 264 | 0.7484 | 0.2919 | 0.7484 | 0.8651 |
| No log | 2.1111 | 266 | 0.7140 | 0.2315 | 0.7140 | 0.8450 |
| No log | 2.1270 | 268 | 0.6944 | 0.2360 | 0.6944 | 0.8333 |
| No log | 2.1429 | 270 | 0.6954 | 0.1861 | 0.6954 | 0.8339 |
| No log | 2.1587 | 272 | 0.6844 | 0.1267 | 0.6844 | 0.8273 |
| No log | 2.1746 | 274 | 0.7417 | 0.1926 | 0.7417 | 0.8612 |
| No log | 2.1905 | 276 | 0.8317 | 0.2050 | 0.8317 | 0.9120 |
| No log | 2.2063 | 278 | 0.8175 | 0.1962 | 0.8175 | 0.9042 |
| No log | 2.2222 | 280 | 0.7382 | 0.2142 | 0.7382 | 0.8592 |
| No log | 2.2381 | 282 | 0.6661 | 0.1867 | 0.6661 | 0.8162 |
| No log | 2.2540 | 284 | 0.6638 | 0.1772 | 0.6638 | 0.8147 |
| No log | 2.2698 | 286 | 0.6696 | 0.1943 | 0.6696 | 0.8183 |
| No log | 2.2857 | 288 | 0.6641 | 0.1930 | 0.6641 | 0.8150 |
| No log | 2.3016 | 290 | 0.6617 | 0.2015 | 0.6617 | 0.8134 |
| No log | 2.3175 | 292 | 0.6898 | 0.2334 | 0.6898 | 0.8305 |
| No log | 2.3333 | 294 | 0.7181 | 0.2155 | 0.7181 | 0.8474 |
| No log | 2.3492 | 296 | 0.6880 | 0.2167 | 0.6880 | 0.8294 |
| No log | 2.3651 | 298 | 0.6686 | 0.1930 | 0.6686 | 0.8177 |
| No log | 2.3810 | 300 | 0.6561 | 0.1942 | 0.6561 | 0.8100 |
| No log | 2.3968 | 302 | 0.6486 | 0.2253 | 0.6486 | 0.8054 |
| No log | 2.4127 | 304 | 0.6478 | 0.2253 | 0.6478 | 0.8048 |
| No log | 2.4286 | 306 | 0.6467 | 0.2253 | 0.6467 | 0.8042 |
| No log | 2.4444 | 308 | 0.6470 | 0.2174 | 0.6470 | 0.8044 |
| No log | 2.4603 | 310 | 0.6493 | 0.2322 | 0.6493 | 0.8058 |
| No log | 2.4762 | 312 | 0.6555 | 0.2150 | 0.6555 | 0.8097 |
| No log | 2.4921 | 314 | 0.6577 | 0.1733 | 0.6577 | 0.8110 |
| No log | 2.5079 | 316 | 0.6626 | 0.2310 | 0.6626 | 0.8140 |
| No log | 2.5238 | 318 | 0.6791 | 0.2113 | 0.6791 | 0.8241 |
| No log | 2.5397 | 320 | 0.6872 | 0.2261 | 0.6872 | 0.8290 |
| No log | 2.5556 | 322 | 0.6825 | 0.2594 | 0.6825 | 0.8261 |
| No log | 2.5714 | 324 | 0.6880 | 0.3086 | 0.6880 | 0.8294 |
| No log | 2.5873 | 326 | 0.7320 | 0.2702 | 0.7320 | 0.8556 |
| No log | 2.6032 | 328 | 0.7231 | 0.2256 | 0.7231 | 0.8504 |
| No log | 2.6190 | 330 | 0.6954 | 0.3280 | 0.6954 | 0.8339 |
| No log | 2.6349 | 332 | 0.7100 | 0.2193 | 0.7100 | 0.8426 |
| No log | 2.6508 | 334 | 0.7105 | 0.2193 | 0.7105 | 0.8429 |
| No log | 2.6667 | 336 | 0.6932 | 0.3325 | 0.6932 | 0.8326 |
| No log | 2.6825 | 338 | 0.7012 | 0.2720 | 0.7012 | 0.8374 |
| No log | 2.6984 | 340 | 0.6905 | 0.2638 | 0.6905 | 0.8309 |
| No log | 2.7143 | 342 | 0.6867 | 0.3055 | 0.6867 | 0.8287 |
| No log | 2.7302 | 344 | 0.6816 | 0.2539 | 0.6816 | 0.8256 |
| No log | 2.7460 | 346 | 0.6774 | 0.2361 | 0.6774 | 0.8231 |
| No log | 2.7619 | 348 | 0.6640 | 0.2949 | 0.6640 | 0.8148 |
| No log | 2.7778 | 350 | 0.6424 | 0.3157 | 0.6424 | 0.8015 |
| No log | 2.7937 | 352 | 0.6380 | 0.2870 | 0.6380 | 0.7987 |
| No log | 2.8095 | 354 | 0.6341 | 0.3733 | 0.6341 | 0.7963 |
| No log | 2.8254 | 356 | 0.6301 | 0.2859 | 0.6301 | 0.7938 |
| No log | 2.8413 | 358 | 0.6297 | 0.2573 | 0.6297 | 0.7935 |
| No log | 2.8571 | 360 | 0.6321 | 0.2573 | 0.6321 | 0.7950 |
| No log | 2.8730 | 362 | 0.6338 | 0.2848 | 0.6338 | 0.7961 |
| No log | 2.8889 | 364 | 0.6460 | 0.3756 | 0.6460 | 0.8037 |
| No log | 2.9048 | 366 | 0.6511 | 0.3260 | 0.6511 | 0.8069 |
| No log | 2.9206 | 368 | 0.6893 | 0.2939 | 0.6893 | 0.8302 |
| No log | 2.9365 | 370 | 0.6689 | 0.3094 | 0.6689 | 0.8179 |
| No log | 2.9524 | 372 | 0.6406 | 0.2015 | 0.6406 | 0.8004 |
| No log | 2.9683 | 374 | 0.6423 | 0.1720 | 0.6423 | 0.8014 |
| No log | 2.9841 | 376 | 0.6482 | 0.2137 | 0.6482 | 0.8051 |
| No log | 3.0 | 378 | 0.6723 | 0.3094 | 0.6723 | 0.8199 |
| No log | 3.0159 | 380 | 0.7570 | 0.3253 | 0.7570 | 0.8701 |
| No log | 3.0317 | 382 | 0.7610 | 0.3253 | 0.7610 | 0.8724 |
| No log | 3.0476 | 384 | 0.6985 | 0.3380 | 0.6985 | 0.8358 |
| No log | 3.0635 | 386 | 0.6423 | 0.2371 | 0.6423 | 0.8015 |
| No log | 3.0794 | 388 | 0.6726 | 0.3757 | 0.6726 | 0.8201 |
| No log | 3.0952 | 390 | 0.7059 | 0.4320 | 0.7059 | 0.8402 |
| No log | 3.1111 | 392 | 0.6791 | 0.3508 | 0.6791 | 0.8241 |
| No log | 3.1270 | 394 | 0.6490 | 0.3808 | 0.6490 | 0.8056 |
| No log | 3.1429 | 396 | 0.6670 | 0.3730 | 0.6670 | 0.8167 |
| No log | 3.1587 | 398 | 0.6862 | 0.3485 | 0.6862 | 0.8284 |
| No log | 3.1746 | 400 | 0.6733 | 0.3315 | 0.6733 | 0.8206 |
| No log | 3.1905 | 402 | 0.6356 | 0.2798 | 0.6356 | 0.7972 |
| No log | 3.2063 | 404 | 0.6293 | 0.2532 | 0.6293 | 0.7933 |
| No log | 3.2222 | 406 | 0.6250 | 0.2382 | 0.6250 | 0.7906 |
| No log | 3.2381 | 408 | 0.6211 | 0.3587 | 0.6211 | 0.7881 |
| No log | 3.2540 | 410 | 0.6326 | 0.3313 | 0.6326 | 0.7953 |
| No log | 3.2698 | 412 | 0.6986 | 0.3485 | 0.6986 | 0.8358 |
| No log | 3.2857 | 414 | 0.7239 | 0.4087 | 0.7239 | 0.8508 |
| No log | 3.3016 | 416 | 0.6740 | 0.4130 | 0.6740 | 0.8210 |
| No log | 3.3175 | 418 | 0.6565 | 0.2585 | 0.6565 | 0.8102 |
| No log | 3.3333 | 420 | 0.6831 | 0.3126 | 0.6831 | 0.8265 |
| No log | 3.3492 | 422 | 0.6750 | 0.3106 | 0.6750 | 0.8216 |
| No log | 3.3651 | 424 | 0.6558 | 0.2371 | 0.6558 | 0.8098 |
| No log | 3.3810 | 426 | 0.6575 | 0.2638 | 0.6575 | 0.8109 |
| No log | 3.3968 | 428 | 0.6837 | 0.2286 | 0.6837 | 0.8268 |
| No log | 3.4127 | 430 | 0.7042 | 0.3339 | 0.7042 | 0.8392 |
| No log | 3.4286 | 432 | 0.6899 | 0.3485 | 0.6899 | 0.8306 |
| No log | 3.4444 | 434 | 0.6559 | 0.3833 | 0.6559 | 0.8099 |
| No log | 3.4603 | 436 | 0.6507 | 0.3833 | 0.6507 | 0.8066 |
| No log | 3.4762 | 438 | 0.6560 | 0.3165 | 0.6560 | 0.8100 |
| No log | 3.4921 | 440 | 0.6803 | 0.3587 | 0.6803 | 0.8248 |
| No log | 3.5079 | 442 | 0.7077 | 0.3436 | 0.7077 | 0.8412 |
| No log | 3.5238 | 444 | 0.7023 | 0.3436 | 0.7023 | 0.8380 |
| No log | 3.5397 | 446 | 0.6623 | 0.2606 | 0.6623 | 0.8138 |
| No log | 3.5556 | 448 | 0.6423 | 0.2947 | 0.6423 | 0.8014 |
| No log | 3.5714 | 450 | 0.6656 | 0.2850 | 0.6656 | 0.8158 |
| No log | 3.5873 | 452 | 0.6685 | 0.2850 | 0.6685 | 0.8176 |
| No log | 3.6032 | 454 | 0.6602 | 0.2607 | 0.6602 | 0.8125 |
| No log | 3.6190 | 456 | 0.6606 | 0.2481 | 0.6606 | 0.8128 |
| No log | 3.6349 | 458 | 0.6685 | 0.2322 | 0.6685 | 0.8176 |
| No log | 3.6508 | 460 | 0.6760 | 0.2186 | 0.6760 | 0.8222 |
| No log | 3.6667 | 462 | 0.6814 | 0.2870 | 0.6814 | 0.8254 |
| No log | 3.6825 | 464 | 0.6885 | 0.3222 | 0.6885 | 0.8298 |
| No log | 3.6984 | 466 | 0.7414 | 0.2880 | 0.7414 | 0.8611 |
| No log | 3.7143 | 468 | 0.7892 | 0.2616 | 0.7892 | 0.8884 |
| No log | 3.7302 | 470 | 0.7817 | 0.2973 | 0.7817 | 0.8841 |
| No log | 3.7460 | 472 | 0.7460 | 0.2772 | 0.7460 | 0.8637 |
| No log | 3.7619 | 474 | 0.7002 | 0.3094 | 0.7002 | 0.8368 |
| No log | 3.7778 | 476 | 0.6667 | 0.3041 | 0.6667 | 0.8165 |
| No log | 3.7937 | 478 | 0.6552 | 0.3201 | 0.6552 | 0.8095 |
| No log | 3.8095 | 480 | 0.6575 | 0.3201 | 0.6575 | 0.8109 |
| No log | 3.8254 | 482 | 0.6529 | 0.3041 | 0.6529 | 0.8080 |
| No log | 3.8413 | 484 | 0.6434 | 0.3041 | 0.6434 | 0.8021 |
| No log | 3.8571 | 486 | 0.6491 | 0.3041 | 0.6491 | 0.8056 |
| No log | 3.8730 | 488 | 0.6575 | 0.2880 | 0.6575 | 0.8109 |
| No log | 3.8889 | 490 | 0.6612 | 0.2720 | 0.6612 | 0.8131 |
| No log | 3.9048 | 492 | 0.6805 | 0.2772 | 0.6805 | 0.8249 |
| No log | 3.9206 | 494 | 0.7015 | 0.2973 | 0.7015 | 0.8375 |
| No log | 3.9365 | 496 | 0.6844 | 0.2983 | 0.6844 | 0.8273 |
| No log | 3.9524 | 498 | 0.6682 | 0.3134 | 0.6682 | 0.8174 |
| 0.3573 | 3.9683 | 500 | 0.6662 | 0.3286 | 0.6662 | 0.8162 |
| 0.3573 | 3.9841 | 502 | 0.6698 | 0.3134 | 0.6698 | 0.8184 |
| 0.3573 | 4.0 | 504 | 0.6615 | 0.3134 | 0.6615 | 0.8133 |
| 0.3573 | 4.0159 | 506 | 0.6563 | 0.3286 | 0.6563 | 0.8101 |
| 0.3573 | 4.0317 | 508 | 0.6673 | 0.3134 | 0.6673 | 0.8169 |
| 0.3573 | 4.0476 | 510 | 0.7149 | 0.2983 | 0.7149 | 0.8455 |
| 0.3573 | 4.0635 | 512 | 0.7928 | 0.3662 | 0.7928 | 0.8904 |
| 0.3573 | 4.0794 | 514 | 0.8271 | 0.3343 | 0.8271 | 0.9095 |
| 0.3573 | 4.0952 | 516 | 0.7716 | 0.3629 | 0.7716 | 0.8784 |
| 0.3573 | 4.1111 | 518 | 0.6812 | 0.2983 | 0.6812 | 0.8253 |
| 0.3573 | 4.1270 | 520 | 0.6331 | 0.2777 | 0.6331 | 0.7957 |
| 0.3573 | 4.1429 | 522 | 0.6327 | 0.2346 | 0.6327 | 0.7954 |
| 0.3573 | 4.1587 | 524 | 0.6302 | 0.2493 | 0.6302 | 0.7938 |
| 0.3573 | 4.1746 | 526 | 0.6342 | 0.2594 | 0.6342 | 0.7963 |
| 0.3573 | 4.1905 | 528 | 0.6522 | 0.2559 | 0.6522 | 0.8076 |
| 0.3573 | 4.2063 | 530 | 0.6670 | 0.2398 | 0.6670 | 0.8167 |
| 0.3573 | 4.2222 | 532 | 0.6669 | 0.2559 | 0.6669 | 0.8166 |
| 0.3573 | 4.2381 | 534 | 0.6637 | 0.2155 | 0.6637 | 0.8147 |
| 0.3573 | 4.2540 | 536 | 0.6790 | 0.2155 | 0.6790 | 0.8240 |
| 0.3573 | 4.2698 | 538 | 0.6773 | 0.2155 | 0.6773 | 0.8230 |
| 0.3573 | 4.2857 | 540 | 0.6729 | 0.3286 | 0.6729 | 0.8203 |
| 0.3573 | 4.3016 | 542 | 0.6797 | 0.3286 | 0.6797 | 0.8244 |
| 0.3573 | 4.3175 | 544 | 0.6826 | 0.3145 | 0.6826 | 0.8262 |
| 0.3573 | 4.3333 | 546 | 0.7285 | 0.2983 | 0.7285 | 0.8535 |
| 0.3573 | 4.3492 | 548 | 0.8302 | 0.2459 | 0.8302 | 0.9111 |
| 0.3573 | 4.3651 | 550 | 0.8892 | 0.2279 | 0.8892 | 0.9430 |
| 0.3573 | 4.3810 | 552 | 0.8707 | 0.2436 | 0.8707 | 0.9331 |
| 0.3573 | 4.3968 | 554 | 0.8006 | 0.2436 | 0.8006 | 0.8948 |
| 0.3573 | 4.4127 | 556 | 0.7106 | 0.2471 | 0.7106 | 0.8429 |
| 0.3573 | 4.4286 | 558 | 0.6674 | 0.2880 | 0.6674 | 0.8169 |
| 0.3573 | 4.4444 | 560 | 0.6568 | 0.2880 | 0.6568 | 0.8105 |
| 0.3573 | 4.4603 | 562 | 0.6724 | 0.2720 | 0.6724 | 0.8200 |
| 0.3573 | 4.4762 | 564 | 0.6978 | 0.2708 | 0.6978 | 0.8354 |
| 0.3573 | 4.4921 | 566 | 0.6959 | 0.2708 | 0.6959 | 0.8342 |
| 0.3573 | 4.5079 | 568 | 0.6954 | 0.2869 | 0.6954 | 0.8339 |
| 0.3573 | 4.5238 | 570 | 0.6852 | 0.2720 | 0.6852 | 0.8278 |
| 0.3573 | 4.5397 | 572 | 0.7043 | 0.2928 | 0.7043 | 0.8392 |
| 0.3573 | 4.5556 | 574 | 0.7280 | 0.3114 | 0.7280 | 0.8532 |
| 0.3573 | 4.5714 | 576 | 0.7415 | 0.2962 | 0.7415 | 0.8611 |
| 0.3573 | 4.5873 | 578 | 0.7029 | 0.3124 | 0.7029 | 0.8384 |
| 0.3573 | 4.6032 | 580 | 0.6522 | 0.3094 | 0.6522 | 0.8076 |
| 0.3573 | 4.6190 | 582 | 0.6272 | 0.3094 | 0.6272 | 0.7920 |
| 0.3573 | 4.6349 | 584 | 0.6342 | 0.3094 | 0.6342 | 0.7963 |
| 0.3573 | 4.6508 | 586 | 0.6669 | 0.3073 | 0.6669 | 0.8167 |
| 0.3573 | 4.6667 | 588 | 0.6890 | 0.2374 | 0.6890 | 0.8301 |
| 0.3573 | 4.6825 | 590 | 0.6732 | 0.2917 | 0.6732 | 0.8205 |
| 0.3573 | 4.6984 | 592 | 0.6532 | 0.2858 | 0.6532 | 0.8082 |
| 0.3573 | 4.7143 | 594 | 0.6443 | 0.3179 | 0.6443 | 0.8027 |
| 0.3573 | 4.7302 | 596 | 0.6368 | 0.3041 | 0.6368 | 0.7980 |
| 0.3573 | 4.7460 | 598 | 0.6295 | 0.2286 | 0.6295 | 0.7934 |
| 0.3573 | 4.7619 | 600 | 0.6305 | 0.2286 | 0.6305 | 0.7940 |
| 0.3573 | 4.7778 | 602 | 0.6364 | 0.3041 | 0.6364 | 0.7978 |
| 0.3573 | 4.7937 | 604 | 0.6543 | 0.2880 | 0.6543 | 0.8089 |
| 0.3573 | 4.8095 | 606 | 0.6895 | 0.2697 | 0.6895 | 0.8303 |
| 0.3573 | 4.8254 | 608 | 0.7259 | 0.2536 | 0.7259 | 0.8520 |
| 0.3573 | 4.8413 | 610 | 0.7564 | 0.2604 | 0.7564 | 0.8697 |
| 0.3573 | 4.8571 | 612 | 0.7399 | 0.2761 | 0.7399 | 0.8602 |
| 0.3573 | 4.8730 | 614 | 0.7139 | 0.2917 | 0.7139 | 0.8449 |
| 0.3573 | 4.8889 | 616 | 0.6678 | 0.3587 | 0.6678 | 0.8172 |
| 0.3573 | 4.9048 | 618 | 0.6344 | 0.3105 | 0.6344 | 0.7965 |
| 0.3573 | 4.9206 | 620 | 0.6306 | 0.2827 | 0.6306 | 0.7941 |
| 0.3573 | 4.9365 | 622 | 0.6421 | 0.3485 | 0.6421 | 0.8013 |
| 0.3573 | 4.9524 | 624 | 0.6556 | 0.3485 | 0.6556 | 0.8097 |
| 0.3573 | 4.9683 | 626 | 0.6802 | 0.3476 | 0.6802 | 0.8247 |
| 0.3573 | 4.9841 | 628 | 0.7055 | 0.3266 | 0.7055 | 0.8400 |
| 0.3573 | 5.0 | 630 | 0.7062 | 0.2906 | 0.7062 | 0.8404 |
| 0.3573 | 5.0159 | 632 | 0.7070 | 0.2536 | 0.7070 | 0.8408 |
| 0.3573 | 5.0317 | 634 | 0.6849 | 0.2547 | 0.6849 | 0.8276 |
| 0.3573 | 5.0476 | 636 | 0.6448 | 0.2949 | 0.6448 | 0.8030 |
| 0.3573 | 5.0635 | 638 | 0.6255 | 0.3062 | 0.6255 | 0.7909 |
| 0.3573 | 5.0794 | 640 | 0.6200 | 0.2925 | 0.6200 | 0.7874 |
| 0.3573 | 5.0952 | 642 | 0.6256 | 0.2936 | 0.6256 | 0.7909 |
| 0.3573 | 5.1111 | 644 | 0.6305 | 0.2925 | 0.6305 | 0.7941 |
| 0.3573 | 5.1270 | 646 | 0.6419 | 0.2446 | 0.6419 | 0.8012 |
| 0.3573 | 5.1429 | 648 | 0.6721 | 0.2949 | 0.6721 | 0.8198 |
| 0.3573 | 5.1587 | 650 | 0.7308 | 0.2386 | 0.7308 | 0.8549 |
| 0.3573 | 5.1746 | 652 | 0.7674 | 0.2761 | 0.7674 | 0.8760 |
| 0.3573 | 5.1905 | 654 | 0.7627 | 0.2386 | 0.7627 | 0.8733 |
| 0.3573 | 5.2063 | 656 | 0.7425 | 0.2627 | 0.7425 | 0.8617 |
| 0.3573 | 5.2222 | 658 | 0.7130 | 0.2939 | 0.7130 | 0.8444 |
| 0.3573 | 5.2381 | 660 | 0.6846 | 0.2805 | 0.6846 | 0.8274 |
| 0.3573 | 5.2540 | 662 | 0.6750 | 0.3115 | 0.6750 | 0.8216 |
| 0.3573 | 5.2698 | 664 | 0.6707 | 0.3062 | 0.6707 | 0.8189 |
| 0.3573 | 5.2857 | 666 | 0.6746 | 0.3115 | 0.6746 | 0.8213 |
| 0.3573 | 5.3016 | 668 | 0.6790 | 0.2960 | 0.6790 | 0.8240 |
| 0.3573 | 5.3175 | 670 | 0.6771 | 0.2731 | 0.6771 | 0.8229 |
| 0.3573 | 5.3333 | 672 | 0.6697 | 0.2731 | 0.6697 | 0.8183 |
| 0.3573 | 5.3492 | 674 | 0.6601 | 0.3211 | 0.6601 | 0.8125 |
| 0.3573 | 5.3651 | 676 | 0.6560 | 0.3211 | 0.6560 | 0.8099 |
| 0.3573 | 5.3810 | 678 | 0.6616 | 0.2892 | 0.6616 | 0.8134 |
| 0.3573 | 5.3968 | 680 | 0.6702 | 0.2731 | 0.6702 | 0.8186 |
| 0.3573 | 5.4127 | 682 | 0.6719 | 0.2731 | 0.6719 | 0.8197 |
| 0.3573 | 5.4286 | 684 | 0.6658 | 0.2949 | 0.6658 | 0.8160 |
| 0.3573 | 5.4444 | 686 | 0.6641 | 0.2949 | 0.6641 | 0.8149 |
| 0.3573 | 5.4603 | 688 | 0.6605 | 0.3105 | 0.6605 | 0.8127 |
| 0.3573 | 5.4762 | 690 | 0.6700 | 0.2949 | 0.6700 | 0.8185 |
| 0.3573 | 5.4921 | 692 | 0.7062 | 0.3286 | 0.7062 | 0.8404 |
| 0.3573 | 5.5079 | 694 | 0.7527 | 0.3417 | 0.7527 | 0.8676 |
| 0.3573 | 5.5238 | 696 | 0.7614 | 0.3417 | 0.7614 | 0.8726 |
| 0.3573 | 5.5397 | 698 | 0.7312 | 0.3073 | 0.7312 | 0.8551 |
| 0.3573 | 5.5556 | 700 | 0.6939 | 0.3041 | 0.6939 | 0.8330 |
| 0.3573 | 5.5714 | 702 | 0.6518 | 0.2892 | 0.6518 | 0.8073 |
| 0.3573 | 5.5873 | 704 | 0.6330 | 0.2594 | 0.6330 | 0.7956 |
| 0.3573 | 5.6032 | 706 | 0.6309 | 0.2594 | 0.6309 | 0.7943 |
| 0.3573 | 5.6190 | 708 | 0.6395 | 0.2892 | 0.6395 | 0.7997 |
| 0.3573 | 5.6349 | 710 | 0.6613 | 0.3030 | 0.6613 | 0.8132 |
| 0.3573 | 5.6508 | 712 | 0.6862 | 0.2858 | 0.6862 | 0.8284 |
| 0.3573 | 5.6667 | 714 | 0.6885 | 0.2858 | 0.6885 | 0.8297 |
| 0.3573 | 5.6825 | 716 | 0.6750 | 0.3019 | 0.6750 | 0.8216 |
| 0.3573 | 5.6984 | 718 | 0.6546 | 0.3041 | 0.6546 | 0.8091 |
| 0.3573 | 5.7143 | 720 | 0.6431 | 0.3201 | 0.6431 | 0.8019 |
| 0.3573 | 5.7302 | 722 | 0.6426 | 0.2446 | 0.6426 | 0.8016 |
| 0.3573 | 5.7460 | 724 | 0.6515 | 0.2765 | 0.6515 | 0.8072 |
| 0.3573 | 5.7619 | 726 | 0.6622 | 0.2446 | 0.6622 | 0.8137 |
| 0.3573 | 5.7778 | 728 | 0.6781 | 0.2516 | 0.6781 | 0.8235 |
| 0.3573 | 5.7937 | 730 | 0.7140 | 0.3436 | 0.7140 | 0.8450 |
| 0.3573 | 5.8095 | 732 | 0.7658 | 0.3310 | 0.7658 | 0.8751 |
| 0.3573 | 5.8254 | 734 | 0.8147 | 0.3486 | 0.8147 | 0.9026 |
| 0.3573 | 5.8413 | 736 | 0.8395 | 0.3343 | 0.8395 | 0.9162 |
| 0.3573 | 5.8571 | 738 | 0.8368 | 0.3005 | 0.8368 | 0.9148 |
| 0.3573 | 5.8730 | 740 | 0.8125 | 0.3005 | 0.8125 | 0.9014 |
| 0.3573 | 5.8889 | 742 | 0.7578 | 0.3310 | 0.7578 | 0.8705 |
| 0.3573 | 5.9048 | 744 | 0.7054 | 0.3476 | 0.7054 | 0.8399 |
| 0.3573 | 5.9206 | 746 | 0.6821 | 0.2361 | 0.6821 | 0.8259 |
| 0.3573 | 5.9365 | 748 | 0.6666 | 0.2606 | 0.6666 | 0.8165 |
| 0.3573 | 5.9524 | 750 | 0.6656 | 0.2446 | 0.6656 | 0.8158 |
| 0.3573 | 5.9683 | 752 | 0.6698 | 0.2446 | 0.6698 | 0.8184 |
| 0.3573 | 5.9841 | 754 | 0.6838 | 0.2505 | 0.6838 | 0.8269 |
| 0.3573 | 6.0 | 756 | 0.7174 | 0.3310 | 0.7174 | 0.8470 |
| 0.3573 | 6.0159 | 758 | 0.7374 | 0.3163 | 0.7374 | 0.8587 |
| 0.3573 | 6.0317 | 760 | 0.7231 | 0.3163 | 0.7231 | 0.8504 |
| 0.3573 | 6.0476 | 762 | 0.7170 | 0.2821 | 0.7170 | 0.8467 |
| 0.3573 | 6.0635 | 764 | 0.7025 | 0.2821 | 0.7025 | 0.8382 |
| 0.3573 | 6.0794 | 766 | 0.6786 | 0.2917 | 0.6786 | 0.8238 |
| 0.3573 | 6.0952 | 768 | 0.6570 | 0.3384 | 0.6570 | 0.8105 |
| 0.3573 | 6.1111 | 770 | 0.6520 | 0.3239 | 0.6520 | 0.8075 |
| 0.3573 | 6.1270 | 772 | 0.6369 | 0.2949 | 0.6369 | 0.7981 |
| 0.3573 | 6.1429 | 774 | 0.6310 | 0.2286 | 0.6310 | 0.7943 |
| 0.3573 | 6.1587 | 776 | 0.6309 | 0.2434 | 0.6309 | 0.7943 |
| 0.3573 | 6.1746 | 778 | 0.6385 | 0.2949 | 0.6385 | 0.7991 |
| 0.3573 | 6.1905 | 780 | 0.6502 | 0.3384 | 0.6502 | 0.8063 |
| 0.3573 | 6.2063 | 782 | 0.6519 | 0.3384 | 0.6519 | 0.8074 |
| 0.3573 | 6.2222 | 784 | 0.6527 | 0.3384 | 0.6527 | 0.8079 |
| 0.3573 | 6.2381 | 786 | 0.6477 | 0.3239 | 0.6477 | 0.8048 |
| 0.3573 | 6.2540 | 788 | 0.6433 | 0.3094 | 0.6433 | 0.8021 |
| 0.3573 | 6.2698 | 790 | 0.6379 | 0.2434 | 0.6379 | 0.7987 |
| 0.3573 | 6.2857 | 792 | 0.6291 | 0.2286 | 0.6291 | 0.7931 |
| 0.3573 | 6.3016 | 794 | 0.6292 | 0.2286 | 0.6292 | 0.7932 |
| 0.3573 | 6.3175 | 796 | 0.6342 | 0.2434 | 0.6342 | 0.7964 |
| 0.3573 | 6.3333 | 798 | 0.6440 | 0.2583 | 0.6440 | 0.8025 |
| 0.3573 | 6.3492 | 800 | 0.6589 | 0.3094 | 0.6589 | 0.8117 |
| 0.3573 | 6.3651 | 802 | 0.6799 | 0.3276 | 0.6799 | 0.8246 |
| 0.3573 | 6.3810 | 804 | 0.6927 | 0.3457 | 0.6927 | 0.8323 |
| 0.3573 | 6.3968 | 806 | 0.6792 | 0.3751 | 0.6792 | 0.8241 |
| 0.3573 | 6.4127 | 808 | 0.6665 | 0.3476 | 0.6665 | 0.8164 |
| 0.3573 | 6.4286 | 810 | 0.6630 | 0.3476 | 0.6630 | 0.8142 |
| 0.3573 | 6.4444 | 812 | 0.6624 | 0.3145 | 0.6624 | 0.8138 |
| 0.3573 | 6.4603 | 814 | 0.6744 | 0.3476 | 0.6744 | 0.8212 |
| 0.3573 | 6.4762 | 816 | 0.6950 | 0.3457 | 0.6950 | 0.8336 |
| 0.3573 | 6.4921 | 818 | 0.6940 | 0.3310 | 0.6940 | 0.8331 |
| 0.3573 | 6.5079 | 820 | 0.6837 | 0.2973 | 0.6837 | 0.8269 |
| 0.3573 | 6.5238 | 822 | 0.6755 | 0.3073 | 0.6755 | 0.8219 |
| 0.3573 | 6.5397 | 824 | 0.6681 | 0.3019 | 0.6681 | 0.8174 |
| 0.3573 | 6.5556 | 826 | 0.6611 | 0.2571 | 0.6611 | 0.8131 |
| 0.3573 | 6.5714 | 828 | 0.6639 | 0.2571 | 0.6639 | 0.8148 |
| 0.3573 | 6.5873 | 830 | 0.6796 | 0.3019 | 0.6796 | 0.8244 |
| 0.3573 | 6.6032 | 832 | 0.6921 | 0.2858 | 0.6921 | 0.8319 |
| 0.3573 | 6.6190 | 834 | 0.6964 | 0.2858 | 0.6964 | 0.8345 |
| 0.3573 | 6.6349 | 836 | 0.6885 | 0.3179 | 0.6885 | 0.8298 |
| 0.3573 | 6.6508 | 838 | 0.6832 | 0.3179 | 0.6832 | 0.8266 |
| 0.3573 | 6.6667 | 840 | 0.6792 | 0.3179 | 0.6792 | 0.8241 |
| 0.3573 | 6.6825 | 842 | 0.6769 | 0.3179 | 0.6769 | 0.8227 |
| 0.3573 | 6.6984 | 844 | 0.6724 | 0.3179 | 0.6724 | 0.8200 |
| 0.3573 | 6.7143 | 846 | 0.6706 | 0.3179 | 0.6706 | 0.8189 |
| 0.3573 | 6.7302 | 848 | 0.6769 | 0.3179 | 0.6769 | 0.8227 |
| 0.3573 | 6.7460 | 850 | 0.6783 | 0.3179 | 0.6783 | 0.8236 |
| 0.3573 | 6.7619 | 852 | 0.6705 | 0.3179 | 0.6705 | 0.8188 |
| 0.3573 | 6.7778 | 854 | 0.6683 | 0.3179 | 0.6683 | 0.8175 |
| 0.3573 | 6.7937 | 856 | 0.6743 | 0.3179 | 0.6743 | 0.8212 |
| 0.3573 | 6.8095 | 858 | 0.6933 | 0.2858 | 0.6933 | 0.8327 |
| 0.3573 | 6.8254 | 860 | 0.7159 | 0.2917 | 0.7159 | 0.8461 |
| 0.3573 | 6.8413 | 862 | 0.7195 | 0.2917 | 0.7195 | 0.8482 |
| 0.3573 | 6.8571 | 864 | 0.7103 | 0.2917 | 0.7103 | 0.8428 |
| 0.3573 | 6.8730 | 866 | 0.6924 | 0.3179 | 0.6924 | 0.8321 |
| 0.3573 | 6.8889 | 868 | 0.6733 | 0.3030 | 0.6733 | 0.8205 |
| 0.3573 | 6.9048 | 870 | 0.6558 | 0.3190 | 0.6558 | 0.8098 |
| 0.3573 | 6.9206 | 872 | 0.6545 | 0.3201 | 0.6545 | 0.8090 |
| 0.3573 | 6.9365 | 874 | 0.6631 | 0.3190 | 0.6631 | 0.8143 |
| 0.3573 | 6.9524 | 876 | 0.6702 | 0.3190 | 0.6702 | 0.8187 |
| 0.3573 | 6.9683 | 878 | 0.6685 | 0.3041 | 0.6685 | 0.8176 |
| 0.3573 | 6.9841 | 880 | 0.6762 | 0.3190 | 0.6762 | 0.8223 |
| 0.3573 | 7.0 | 882 | 0.6939 | 0.3384 | 0.6939 | 0.8330 |
| 0.3573 | 7.0159 | 884 | 0.7085 | 0.3604 | 0.7085 | 0.8417 |
| 0.3573 | 7.0317 | 886 | 0.7182 | 0.3604 | 0.7182 | 0.8475 |
| 0.3573 | 7.0476 | 888 | 0.7368 | 0.3457 | 0.7368 | 0.8584 |
| 0.3573 | 7.0635 | 890 | 0.7542 | 0.3163 | 0.7542 | 0.8685 |
| 0.3573 | 7.0794 | 892 | 0.7442 | 0.3457 | 0.7442 | 0.8627 |
| 0.3573 | 7.0952 | 894 | 0.7168 | 0.3604 | 0.7168 | 0.8467 |
| 0.3573 | 7.1111 | 896 | 0.6963 | 0.3427 | 0.6963 | 0.8344 |
| 0.3573 | 7.1270 | 898 | 0.6823 | 0.3094 | 0.6823 | 0.8260 |
| 0.3573 | 7.1429 | 900 | 0.6786 | 0.3094 | 0.6786 | 0.8238 |
| 0.3573 | 7.1587 | 902 | 0.6828 | 0.3094 | 0.6828 | 0.8263 |
| 0.3573 | 7.1746 | 904 | 0.6835 | 0.3094 | 0.6835 | 0.8267 |
| 0.3573 | 7.1905 | 906 | 0.6930 | 0.3084 | 0.6930 | 0.8324 |
| 0.3573 | 7.2063 | 908 | 0.7248 | 0.3457 | 0.7248 | 0.8513 |
| 0.3573 | 7.2222 | 910 | 0.7552 | 0.3457 | 0.7552 | 0.8690 |
| 0.3573 | 7.2381 | 912 | 0.7869 | 0.3310 | 0.7869 | 0.8871 |
| 0.3573 | 7.2540 | 914 | 0.7971 | 0.3310 | 0.7971 | 0.8928 |
| 0.3573 | 7.2698 | 916 | 0.7782 | 0.3457 | 0.7782 | 0.8821 |
| 0.3573 | 7.2857 | 918 | 0.7539 | 0.3457 | 0.7539 | 0.8682 |
| 0.3573 | 7.3016 | 920 | 0.7248 | 0.3124 | 0.7248 | 0.8514 |
| 0.3573 | 7.3175 | 922 | 0.6984 | 0.3229 | 0.6984 | 0.8357 |
| 0.3573 | 7.3333 | 924 | 0.6842 | 0.3030 | 0.6842 | 0.8271 |
| 0.3573 | 7.3492 | 926 | 0.6694 | 0.3041 | 0.6694 | 0.8182 |
| 0.3573 | 7.3651 | 928 | 0.6630 | 0.3201 | 0.6630 | 0.8142 |
| 0.3573 | 7.3810 | 930 | 0.6607 | 0.3201 | 0.6607 | 0.8129 |
| 0.3573 | 7.3968 | 932 | 0.6642 | 0.3201 | 0.6642 | 0.8150 |
| 0.3573 | 7.4127 | 934 | 0.6729 | 0.3190 | 0.6729 | 0.8203 |
| 0.3573 | 7.4286 | 936 | 0.6766 | 0.3190 | 0.6766 | 0.8225 |
| 0.3573 | 7.4444 | 938 | 0.6881 | 0.3179 | 0.6881 | 0.8295 |
| 0.3573 | 7.4603 | 940 | 0.7041 | 0.3179 | 0.7041 | 0.8391 |
| 0.3573 | 7.4762 | 942 | 0.7085 | 0.3179 | 0.7085 | 0.8417 |
| 0.3573 | 7.4921 | 944 | 0.7119 | 0.3229 | 0.7119 | 0.8438 |
| 0.3573 | 7.5079 | 946 | 0.7066 | 0.3229 | 0.7066 | 0.8406 |
| 0.3573 | 7.5238 | 948 | 0.7000 | 0.3229 | 0.7000 | 0.8367 |
| 0.3573 | 7.5397 | 950 | 0.6937 | 0.3179 | 0.6937 | 0.8329 |
| 0.3573 | 7.5556 | 952 | 0.6970 | 0.3229 | 0.6970 | 0.8348 |
| 0.3573 | 7.5714 | 954 | 0.7036 | 0.3276 | 0.7036 | 0.8388 |
| 0.3573 | 7.5873 | 956 | 0.7025 | 0.3276 | 0.7025 | 0.8382 |
| 0.3573 | 7.6032 | 958 | 0.7093 | 0.3604 | 0.7093 | 0.8422 |
| 0.3573 | 7.6190 | 960 | 0.7190 | 0.3457 | 0.7190 | 0.8480 |
| 0.3573 | 7.6349 | 962 | 0.7197 | 0.3457 | 0.7197 | 0.8484 |
| 0.3573 | 7.6508 | 964 | 0.7050 | 0.3604 | 0.7050 | 0.8397 |
| 0.3573 | 7.6667 | 966 | 0.6883 | 0.3751 | 0.6883 | 0.8296 |
| 0.3573 | 7.6825 | 968 | 0.6789 | 0.3427 | 0.6789 | 0.8239 |
| 0.3573 | 7.6984 | 970 | 0.6697 | 0.3427 | 0.6697 | 0.8184 |
| 0.3573 | 7.7143 | 972 | 0.6628 | 0.3239 | 0.6628 | 0.8141 |
| 0.3573 | 7.7302 | 974 | 0.6610 | 0.3190 | 0.6610 | 0.8130 |
| 0.3573 | 7.7460 | 976 | 0.6682 | 0.3239 | 0.6682 | 0.8174 |
| 0.3573 | 7.7619 | 978 | 0.6813 | 0.3427 | 0.6813 | 0.8254 |
| 0.3573 | 7.7778 | 980 | 0.7017 | 0.3604 | 0.7017 | 0.8377 |
| 0.3573 | 7.7937 | 982 | 0.7280 | 0.3457 | 0.7280 | 0.8533 |
| 0.3573 | 7.8095 | 984 | 0.7491 | 0.3310 | 0.7491 | 0.8655 |
| 0.3573 | 7.8254 | 986 | 0.7591 | 0.3310 | 0.7591 | 0.8713 |
| 0.3573 | 7.8413 | 988 | 0.7431 | 0.3310 | 0.7431 | 0.8620 |
| 0.3573 | 7.8571 | 990 | 0.7187 | 0.3310 | 0.7187 | 0.8478 |
| 0.3573 | 7.8730 | 992 | 0.7004 | 0.3604 | 0.7004 | 0.8369 |
| 0.3573 | 7.8889 | 994 | 0.6931 | 0.3229 | 0.6931 | 0.8325 |
| 0.3573 | 7.9048 | 996 | 0.6915 | 0.3239 | 0.6915 | 0.8316 |
| 0.3573 | 7.9206 | 998 | 0.6923 | 0.3239 | 0.6923 | 0.8320 |
| 0.0622 | 7.9365 | 1000 | 0.6903 | 0.3239 | 0.6903 | 0.8309 |
| 0.0622 | 7.9524 | 1002 | 0.6959 | 0.3286 | 0.6959 | 0.8342 |
| 0.0622 | 7.9683 | 1004 | 0.6980 | 0.3286 | 0.6980 | 0.8355 |
| 0.0622 | 7.9841 | 1006 | 0.6999 | 0.3190 | 0.6999 | 0.8366 |
| 0.0622 | 8.0 | 1008 | 0.7072 | 0.3084 | 0.7072 | 0.8410 |
| 0.0622 | 8.0159 | 1010 | 0.7119 | 0.3084 | 0.7119 | 0.8438 |
| 0.0622 | 8.0317 | 1012 | 0.7157 | 0.2869 | 0.7157 | 0.8460 |
| 0.0622 | 8.0476 | 1014 | 0.7219 | 0.2869 | 0.7219 | 0.8496 |
| 0.0622 | 8.0635 | 1016 | 0.7236 | 0.3019 | 0.7236 | 0.8506 |
| 0.0622 | 8.0794 | 1018 | 0.7217 | 0.2858 | 0.7217 | 0.8496 |
| 0.0622 | 8.0952 | 1020 | 0.7134 | 0.2708 | 0.7134 | 0.8447 |
| 0.0622 | 8.1111 | 1022 | 0.7120 | 0.2708 | 0.7120 | 0.8438 |
| 0.0622 | 8.1270 | 1024 | 0.7060 | 0.2869 | 0.7060 | 0.8403 |
| 0.0622 | 8.1429 | 1026 | 0.7044 | 0.2869 | 0.7044 | 0.8393 |
| 0.0622 | 8.1587 | 1028 | 0.6996 | 0.2869 | 0.6996 | 0.8364 |
| 0.0622 | 8.1746 | 1030 | 0.6989 | 0.2869 | 0.6989 | 0.8360 |
| 0.0622 | 8.1905 | 1032 | 0.7025 | 0.2708 | 0.7025 | 0.8382 |
| 0.0622 | 8.2063 | 1034 | 0.7011 | 0.2708 | 0.7011 | 0.8373 |
| 0.0622 | 8.2222 | 1036 | 0.6934 | 0.3030 | 0.6934 | 0.8327 |
| 0.0622 | 8.2381 | 1038 | 0.6850 | 0.3190 | 0.6850 | 0.8277 |
| 0.0622 | 8.2540 | 1040 | 0.6820 | 0.3190 | 0.6820 | 0.8258 |
| 0.0622 | 8.2698 | 1042 | 0.6854 | 0.3190 | 0.6854 | 0.8279 |
| 0.0622 | 8.2857 | 1044 | 0.6900 | 0.3190 | 0.6900 | 0.8307 |
| 0.0622 | 8.3016 | 1046 | 0.7000 | 0.3030 | 0.7000 | 0.8367 |
| 0.0622 | 8.3175 | 1048 | 0.7097 | 0.3030 | 0.7097 | 0.8424 |
| 0.0622 | 8.3333 | 1050 | 0.7174 | 0.3030 | 0.7174 | 0.8470 |
| 0.0622 | 8.3492 | 1052 | 0.7263 | 0.3030 | 0.7263 | 0.8523 |
| 0.0622 | 8.3651 | 1054 | 0.7432 | 0.2708 | 0.7432 | 0.8621 |
| 0.0622 | 8.3810 | 1056 | 0.7660 | 0.3163 | 0.7660 | 0.8752 |
| 0.0622 | 8.3968 | 1058 | 0.7813 | 0.3163 | 0.7813 | 0.8839 |
| 0.0622 | 8.4127 | 1060 | 0.7881 | 0.3163 | 0.7881 | 0.8877 |
| 0.0622 | 8.4286 | 1062 | 0.7957 | 0.3163 | 0.7957 | 0.8920 |
| 0.0622 | 8.4444 | 1064 | 0.7985 | 0.3163 | 0.7985 | 0.8936 |
| 0.0622 | 8.4603 | 1066 | 0.7943 | 0.3163 | 0.7943 | 0.8912 |
| 0.0622 | 8.4762 | 1068 | 0.7819 | 0.3163 | 0.7819 | 0.8843 |
| 0.0622 | 8.4921 | 1070 | 0.7694 | 0.3163 | 0.7694 | 0.8771 |
| 0.0622 | 8.5079 | 1072 | 0.7528 | 0.3163 | 0.7528 | 0.8676 |
| 0.0622 | 8.5238 | 1074 | 0.7383 | 0.3025 | 0.7383 | 0.8592 |
| 0.0622 | 8.5397 | 1076 | 0.7242 | 0.2973 | 0.7242 | 0.8510 |
| 0.0622 | 8.5556 | 1078 | 0.7095 | 0.2708 | 0.7094 | 0.8423 |
| 0.0622 | 8.5714 | 1080 | 0.6997 | 0.2869 | 0.6997 | 0.8365 |
| 0.0622 | 8.5873 | 1082 | 0.6938 | 0.3030 | 0.6938 | 0.8330 |
| 0.0622 | 8.6032 | 1084 | 0.6943 | 0.3030 | 0.6943 | 0.8332 |
| 0.0622 | 8.6190 | 1086 | 0.6954 | 0.3030 | 0.6954 | 0.8339 |
| 0.0622 | 8.6349 | 1088 | 0.6947 | 0.3030 | 0.6947 | 0.8335 |
| 0.0622 | 8.6508 | 1090 | 0.6986 | 0.2708 | 0.6986 | 0.8358 |
| 0.0622 | 8.6667 | 1092 | 0.7008 | 0.2708 | 0.7008 | 0.8371 |
| 0.0622 | 8.6825 | 1094 | 0.7059 | 0.2547 | 0.7059 | 0.8402 |
| 0.0622 | 8.6984 | 1096 | 0.7091 | 0.2547 | 0.7091 | 0.8421 |
| 0.0622 | 8.7143 | 1098 | 0.7116 | 0.2708 | 0.7116 | 0.8436 |
| 0.0622 | 8.7302 | 1100 | 0.7148 | 0.2708 | 0.7148 | 0.8455 |
| 0.0622 | 8.7460 | 1102 | 0.7201 | 0.2708 | 0.7201 | 0.8486 |
| 0.0622 | 8.7619 | 1104 | 0.7220 | 0.2772 | 0.7220 | 0.8497 |
| 0.0622 | 8.7778 | 1106 | 0.7227 | 0.2708 | 0.7227 | 0.8501 |
| 0.0622 | 8.7937 | 1108 | 0.7244 | 0.2708 | 0.7244 | 0.8511 |
| 0.0622 | 8.8095 | 1110 | 0.7322 | 0.2616 | 0.7322 | 0.8557 |
| 0.0622 | 8.8254 | 1112 | 0.7410 | 0.2973 | 0.7410 | 0.8608 |
| 0.0622 | 8.8413 | 1114 | 0.7467 | 0.2973 | 0.7467 | 0.8641 |
| 0.0622 | 8.8571 | 1116 | 0.7463 | 0.2973 | 0.7463 | 0.8639 |
| 0.0622 | 8.8730 | 1118 | 0.7441 | 0.2973 | 0.7441 | 0.8626 |
| 0.0622 | 8.8889 | 1120 | 0.7484 | 0.2973 | 0.7484 | 0.8651 |
| 0.0622 | 8.9048 | 1122 | 0.7545 | 0.3114 | 0.7545 | 0.8686 |
| 0.0622 | 8.9206 | 1124 | 0.7566 | 0.3114 | 0.7566 | 0.8698 |
| 0.0622 | 8.9365 | 1126 | 0.7570 | 0.3114 | 0.7570 | 0.8701 |
| 0.0622 | 8.9524 | 1128 | 0.7560 | 0.3163 | 0.7560 | 0.8695 |
| 0.0622 | 8.9683 | 1130 | 0.7541 | 0.3163 | 0.7541 | 0.8684 |
| 0.0622 | 8.9841 | 1132 | 0.7534 | 0.3163 | 0.7534 | 0.8680 |
| 0.0622 | 9.0 | 1134 | 0.7448 | 0.3025 | 0.7448 | 0.8630 |
| 0.0622 | 9.0159 | 1136 | 0.7353 | 0.2973 | 0.7353 | 0.8575 |
| 0.0622 | 9.0317 | 1138 | 0.7269 | 0.3124 | 0.7269 | 0.8526 |
| 0.0622 | 9.0476 | 1140 | 0.7238 | 0.3124 | 0.7238 | 0.8508 |
| 0.0622 | 9.0635 | 1142 | 0.7212 | 0.2772 | 0.7212 | 0.8492 |
| 0.0622 | 9.0794 | 1144 | 0.7223 | 0.2772 | 0.7223 | 0.8499 |
| 0.0622 | 9.0952 | 1146 | 0.7225 | 0.2708 | 0.7225 | 0.8500 |
| 0.0622 | 9.1111 | 1148 | 0.7188 | 0.2708 | 0.7188 | 0.8478 |
| 0.0622 | 9.1270 | 1150 | 0.7143 | 0.2869 | 0.7143 | 0.8452 |
| 0.0622 | 9.1429 | 1152 | 0.7135 | 0.2869 | 0.7135 | 0.8447 |
| 0.0622 | 9.1587 | 1154 | 0.7140 | 0.2869 | 0.7140 | 0.8450 |
| 0.0622 | 9.1746 | 1156 | 0.7158 | 0.2869 | 0.7158 | 0.8461 |
| 0.0622 | 9.1905 | 1158 | 0.7179 | 0.2708 | 0.7179 | 0.8473 |
| 0.0622 | 9.2063 | 1160 | 0.7182 | 0.2708 | 0.7182 | 0.8475 |
| 0.0622 | 9.2222 | 1162 | 0.7172 | 0.2708 | 0.7172 | 0.8468 |
| 0.0622 | 9.2381 | 1164 | 0.7157 | 0.2708 | 0.7157 | 0.8460 |
| 0.0622 | 9.2540 | 1166 | 0.7129 | 0.2869 | 0.7129 | 0.8443 |
| 0.0622 | 9.2698 | 1168 | 0.7091 | 0.2869 | 0.7091 | 0.8421 |
| 0.0622 | 9.2857 | 1170 | 0.7049 | 0.2869 | 0.7049 | 0.8396 |
| 0.0622 | 9.3016 | 1172 | 0.7011 | 0.3030 | 0.7011 | 0.8373 |
| 0.0622 | 9.3175 | 1174 | 0.6989 | 0.3030 | 0.6989 | 0.8360 |
| 0.0622 | 9.3333 | 1176 | 0.6971 | 0.3190 | 0.6971 | 0.8349 |
| 0.0622 | 9.3492 | 1178 | 0.6974 | 0.3190 | 0.6974 | 0.8351 |
| 0.0622 | 9.3651 | 1180 | 0.6994 | 0.3190 | 0.6994 | 0.8363 |
| 0.0622 | 9.3810 | 1182 | 0.7029 | 0.3030 | 0.7029 | 0.8384 |
| 0.0622 | 9.3968 | 1184 | 0.7078 | 0.2869 | 0.7078 | 0.8413 |
| 0.0622 | 9.4127 | 1186 | 0.7144 | 0.2869 | 0.7144 | 0.8452 |
| 0.0622 | 9.4286 | 1188 | 0.7208 | 0.2869 | 0.7208 | 0.8490 |
| 0.0622 | 9.4444 | 1190 | 0.7269 | 0.2708 | 0.7269 | 0.8526 |
| 0.0622 | 9.4603 | 1192 | 0.7294 | 0.2708 | 0.7294 | 0.8540 |
| 0.0622 | 9.4762 | 1194 | 0.7329 | 0.2708 | 0.7329 | 0.8561 |
| 0.0622 | 9.4921 | 1196 | 0.7382 | 0.2772 | 0.7382 | 0.8592 |
| 0.0622 | 9.5079 | 1198 | 0.7415 | 0.3124 | 0.7415 | 0.8611 |
| 0.0622 | 9.5238 | 1200 | 0.7435 | 0.3124 | 0.7435 | 0.8623 |
| 0.0622 | 9.5397 | 1202 | 0.7452 | 0.2973 | 0.7452 | 0.8633 |
| 0.0622 | 9.5556 | 1204 | 0.7469 | 0.2973 | 0.7469 | 0.8642 |
| 0.0622 | 9.5714 | 1206 | 0.7490 | 0.2973 | 0.7490 | 0.8654 |
| 0.0622 | 9.5873 | 1208 | 0.7501 | 0.2973 | 0.7501 | 0.8661 |
| 0.0622 | 9.6032 | 1210 | 0.7525 | 0.2973 | 0.7525 | 0.8675 |
| 0.0622 | 9.6190 | 1212 | 0.7520 | 0.2973 | 0.7520 | 0.8672 |
| 0.0622 | 9.6349 | 1214 | 0.7511 | 0.2973 | 0.7511 | 0.8667 |
| 0.0622 | 9.6508 | 1216 | 0.7510 | 0.2973 | 0.7510 | 0.8666 |
| 0.0622 | 9.6667 | 1218 | 0.7506 | 0.2973 | 0.7506 | 0.8664 |
| 0.0622 | 9.6825 | 1220 | 0.7507 | 0.2973 | 0.7507 | 0.8664 |
| 0.0622 | 9.6984 | 1222 | 0.7494 | 0.2973 | 0.7494 | 0.8657 |
| 0.0622 | 9.7143 | 1224 | 0.7486 | 0.2973 | 0.7486 | 0.8652 |
| 0.0622 | 9.7302 | 1226 | 0.7461 | 0.2973 | 0.7461 | 0.8638 |
| 0.0622 | 9.7460 | 1228 | 0.7425 | 0.2973 | 0.7425 | 0.8617 |
| 0.0622 | 9.7619 | 1230 | 0.7387 | 0.2973 | 0.7387 | 0.8595 |
| 0.0622 | 9.7778 | 1232 | 0.7357 | 0.2973 | 0.7357 | 0.8577 |
| 0.0622 | 9.7937 | 1234 | 0.7333 | 0.2973 | 0.7333 | 0.8563 |
| 0.0622 | 9.8095 | 1236 | 0.7324 | 0.2973 | 0.7324 | 0.8558 |
| 0.0622 | 9.8254 | 1238 | 0.7315 | 0.2973 | 0.7315 | 0.8553 |
| 0.0622 | 9.8413 | 1240 | 0.7310 | 0.2973 | 0.7310 | 0.8550 |
| 0.0622 | 9.8571 | 1242 | 0.7305 | 0.3124 | 0.7305 | 0.8547 |
| 0.0622 | 9.8730 | 1244 | 0.7300 | 0.3124 | 0.7300 | 0.8544 |
| 0.0622 | 9.8889 | 1246 | 0.7296 | 0.3124 | 0.7296 | 0.8542 |
| 0.0622 | 9.9048 | 1248 | 0.7297 | 0.3124 | 0.7297 | 0.8542 |
| 0.0622 | 9.9206 | 1250 | 0.7294 | 0.3124 | 0.7294 | 0.8540 |
| 0.0622 | 9.9365 | 1252 | 0.7291 | 0.3124 | 0.7291 | 0.8539 |
| 0.0622 | 9.9524 | 1254 | 0.7291 | 0.3124 | 0.7291 | 0.8539 |
| 0.0622 | 9.9683 | 1256 | 0.7289 | 0.3124 | 0.7289 | 0.8537 |
| 0.0622 | 9.9841 | 1258 | 0.7287 | 0.3124 | 0.7287 | 0.8536 |
| 0.0622 | 10.0 | 1260 | 0.7286 | 0.3124 | 0.7286 | 0.8536 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Dipl0/ALS_TOKEN_INSTRUCT | Dipl0 | 2024-11-26T16:50:47Z | 14 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-26T16:49:23Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Dipl0
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-3B-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
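A minimal usage sketch for the GGUF export with `llama-cpp-python` is shown below; the GGUF filename is a placeholder assumption (check the repository's file list for the actual name), and the prompt is illustrative only.

```python
# Minimal sketch: download a GGUF file from the Hub and run a chat completion
# with llama-cpp-python. The filename is a placeholder assumption -- check the
# repository's file list for the real quantized GGUF name.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Dipl0/ALS_TOKEN_INSTRUCT",
    filename="ALS_TOKEN_INSTRUCT-Q4_K_M.gguf",  # hypothetical filename
)

llm = Llama(model_path=model_path, n_ctx=2048)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what this model was fine-tuned for."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```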
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dcrowleymunster/roberta-finetuned-sunderlandUni2-emergency-proj | dcrowleymunster | 2024-11-26T16:49:03Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:dcrowleymunster/roberta-finetuned-sunderlandUni-emergency-proj",
"base_model:finetune:dcrowleymunster/roberta-finetuned-sunderlandUni-emergency-proj",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-11-26T12:45:11Z | ---
library_name: transformers
license: cc-by-4.0
base_model: dcrowleymunster/roberta-finetuned-sunderlandUni-emergency-proj
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-sunderlandUni2-emergency-proj
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-sunderlandUni2-emergency-proj
This model is a fine-tuned version of [dcrowleymunster/roberta-finetuned-sunderlandUni-emergency-proj](https://huggingface.co/dcrowleymunster/roberta-finetuned-sunderlandUni-emergency-proj) on the None dataset.
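As the checkpoint is tagged for extractive question answering, a minimal usage sketch with the Transformers `pipeline` could look like the following; the question/context pair is illustrative, not taken from the training data.

```python
# Minimal sketch: extractive question answering with the fine-tuned checkpoint.
# The question/context pair below is illustrative only.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="dcrowleymunster/roberta-finetuned-sunderlandUni2-emergency-proj",
)

result = qa(
    question="Who should students contact in an emergency?",
    context="In an emergency, students should contact campus security via the main helpline.",
)
print(result["answer"], result["score"])
```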
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
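A hedged sketch of how these settings might map onto `TrainingArguments`; the output directory is a placeholder, and the dataset/Trainer wiring is omitted rather than taken from this card.

```python
# Illustrative mapping of the listed hyperparameters to TrainingArguments.
# output_dir is a placeholder; dataset and Trainer setup are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-finetuned-sunderlandUni2-emergency-proj",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    fp16=True,  # native AMP mixed-precision training
)
```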
### Training results
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
armaanahuja7777/thaparChatbot | armaanahuja7777 | 2024-11-26T16:42:19Z | 42 | 0 | null | [
"safetensors",
"gpt2",
"text-generation",
"en",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"region:us"
] | text-generation | 2024-11-26T15:31:54Z | ---
language:
- en
base_model:
- openai-community/gpt2
pipeline_tag: text-generation
--- |
MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k35_task2_organization_fold1 | MayBashendy | 2024-11-26T16:40:47Z | 165 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T16:22:41Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits_FineTuningAraBERT_AugV5_k35_task2_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits_FineTuningAraBERT_AugV5_k35_task2_organization_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set (a sketch of how these metrics can be computed follows the list):
- Loss: 0.9905
- Qwk: 0.3002
- Mse: 0.9905
- Rmse: 0.9952
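A minimal sketch of how Qwk, Mse, and Rmse can be computed from integer label predictions with scikit-learn; the label arrays below are toy values, not this model's actual outputs.

```python
# Illustrative computation of the reported metrics: quadratic weighted kappa
# (Qwk), mean squared error (Mse) and root mean squared error (Rmse).
# y_true / y_pred are toy values, not this model's predictions.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([0, 1, 2, 1, 0, 2])
y_pred = np.array([0, 1, 1, 1, 0, 2])

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
print(f"Qwk={qwk:.4f} Mse={mse:.4f} Rmse={rmse:.4f}")
```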
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0161 | 2 | 3.8502 | -0.0159 | 3.8502 | 1.9622 |
| No log | 0.0323 | 4 | 1.9094 | -0.0921 | 1.9094 | 1.3818 |
| No log | 0.0484 | 6 | 1.3686 | -0.1621 | 1.3686 | 1.1699 |
| No log | 0.0645 | 8 | 1.2448 | -0.1247 | 1.2448 | 1.1157 |
| No log | 0.0806 | 10 | 1.7480 | -0.1609 | 1.7480 | 1.3221 |
| No log | 0.0968 | 12 | 2.0540 | -0.1367 | 2.0540 | 1.4332 |
| No log | 0.1129 | 14 | 1.8231 | -0.1190 | 1.8231 | 1.3502 |
| No log | 0.1290 | 16 | 1.3808 | -0.0694 | 1.3808 | 1.1751 |
| No log | 0.1452 | 18 | 0.9932 | -0.0037 | 0.9932 | 0.9966 |
| No log | 0.1613 | 20 | 0.9153 | 0.0 | 0.9153 | 0.9567 |
| No log | 0.1774 | 22 | 1.0188 | -0.0037 | 1.0188 | 1.0094 |
| No log | 0.1935 | 24 | 1.1319 | 0.0079 | 1.1319 | 1.0639 |
| No log | 0.2097 | 26 | 1.1184 | -0.0571 | 1.1184 | 1.0576 |
| No log | 0.2258 | 28 | 1.3171 | -0.0439 | 1.3171 | 1.1476 |
| No log | 0.2419 | 30 | 1.1230 | -0.0421 | 1.1230 | 1.0597 |
| No log | 0.2581 | 32 | 1.1303 | -0.0835 | 1.1303 | 1.0632 |
| No log | 0.2742 | 34 | 1.2961 | -0.0201 | 1.2961 | 1.1385 |
| No log | 0.2903 | 36 | 1.4409 | -0.0110 | 1.4409 | 1.2004 |
| No log | 0.3065 | 38 | 1.2385 | -0.0045 | 1.2385 | 1.1129 |
| No log | 0.3226 | 40 | 0.8743 | 0.0423 | 0.8743 | 0.9350 |
| No log | 0.3387 | 42 | 0.7370 | 0.0065 | 0.7370 | 0.8585 |
| No log | 0.3548 | 44 | 0.7332 | 0.0706 | 0.7332 | 0.8563 |
| No log | 0.3710 | 46 | 0.8269 | 0.0 | 0.8269 | 0.9093 |
| No log | 0.3871 | 48 | 0.9283 | 0.0321 | 0.9283 | 0.9635 |
| No log | 0.4032 | 50 | 1.0417 | 0.0120 | 1.0417 | 1.0206 |
| No log | 0.4194 | 52 | 1.2245 | 0.0175 | 1.2245 | 1.1066 |
| No log | 0.4355 | 54 | 1.5438 | -0.0460 | 1.5438 | 1.2425 |
| No log | 0.4516 | 56 | 1.6001 | -0.0255 | 1.6001 | 1.2650 |
| No log | 0.4677 | 58 | 1.3394 | 0.0180 | 1.3394 | 1.1573 |
| No log | 0.4839 | 60 | 1.0782 | -0.0280 | 1.0782 | 1.0384 |
| No log | 0.5 | 62 | 0.7608 | 0.0308 | 0.7608 | 0.8722 |
| No log | 0.5161 | 64 | 0.6940 | 0.0292 | 0.6940 | 0.8330 |
| No log | 0.5323 | 66 | 0.6612 | 0.0960 | 0.6612 | 0.8132 |
| No log | 0.5484 | 68 | 0.6820 | 0.1360 | 0.6820 | 0.8258 |
| No log | 0.5645 | 70 | 0.8019 | 0.0 | 0.8019 | 0.8955 |
| No log | 0.5806 | 72 | 0.8832 | 0.0 | 0.8832 | 0.9398 |
| No log | 0.5968 | 74 | 0.8565 | 0.0 | 0.8565 | 0.9255 |
| No log | 0.6129 | 76 | 0.8384 | 0.0 | 0.8384 | 0.9157 |
| No log | 0.6290 | 78 | 0.8933 | 0.0 | 0.8933 | 0.9451 |
| No log | 0.6452 | 80 | 0.9239 | 0.0 | 0.9239 | 0.9612 |
| No log | 0.6613 | 82 | 0.9499 | 0.0166 | 0.9499 | 0.9746 |
| No log | 0.6774 | 84 | 0.9024 | 0.0166 | 0.9024 | 0.9500 |
| No log | 0.6935 | 86 | 0.9180 | 0.0741 | 0.9180 | 0.9581 |
| No log | 0.7097 | 88 | 0.9657 | 0.0518 | 0.9657 | 0.9827 |
| No log | 0.7258 | 90 | 0.9197 | 0.0518 | 0.9197 | 0.9590 |
| No log | 0.7419 | 92 | 0.8726 | 0.0596 | 0.8726 | 0.9341 |
| No log | 0.7581 | 94 | 0.8042 | 0.0633 | 0.8042 | 0.8968 |
| No log | 0.7742 | 96 | 0.7467 | 0.0743 | 0.7467 | 0.8641 |
| No log | 0.7903 | 98 | 0.7218 | 0.0609 | 0.7218 | 0.8496 |
| No log | 0.8065 | 100 | 0.7452 | 0.1121 | 0.7452 | 0.8633 |
| No log | 0.8226 | 102 | 0.8425 | 0.1189 | 0.8425 | 0.9179 |
| No log | 0.8387 | 104 | 0.8782 | 0.0943 | 0.8782 | 0.9371 |
| No log | 0.8548 | 106 | 0.8916 | 0.0943 | 0.8916 | 0.9442 |
| No log | 0.8710 | 108 | 0.8185 | 0.1224 | 0.8185 | 0.9047 |
| No log | 0.8871 | 110 | 0.7864 | 0.0878 | 0.7864 | 0.8868 |
| No log | 0.9032 | 112 | 0.7827 | 0.0645 | 0.7827 | 0.8847 |
| No log | 0.9194 | 114 | 0.7703 | 0.0681 | 0.7703 | 0.8777 |
| No log | 0.9355 | 116 | 0.7967 | 0.0188 | 0.7967 | 0.8926 |
| No log | 0.9516 | 118 | 0.7806 | 0.0530 | 0.7806 | 0.8835 |
| No log | 0.9677 | 120 | 0.7997 | 0.0404 | 0.7997 | 0.8943 |
| No log | 0.9839 | 122 | 0.8484 | 0.0495 | 0.8484 | 0.9211 |
| No log | 1.0 | 124 | 0.9313 | 0.0942 | 0.9313 | 0.9651 |
| No log | 1.0161 | 126 | 1.0002 | 0.0529 | 1.0002 | 1.0001 |
| No log | 1.0323 | 128 | 0.9785 | 0.0665 | 0.9785 | 0.9892 |
| No log | 1.0484 | 130 | 0.8965 | 0.0645 | 0.8965 | 0.9468 |
| No log | 1.0645 | 132 | 0.7767 | 0.1233 | 0.7767 | 0.8813 |
| No log | 1.0806 | 134 | 0.7657 | 0.2211 | 0.7657 | 0.8750 |
| No log | 1.0968 | 136 | 0.7750 | 0.1683 | 0.7750 | 0.8803 |
| No log | 1.1129 | 138 | 0.7986 | 0.1533 | 0.7986 | 0.8937 |
| No log | 1.1290 | 140 | 0.8957 | 0.0370 | 0.8957 | 0.9464 |
| No log | 1.1452 | 142 | 0.9525 | 0.0453 | 0.9525 | 0.9760 |
| No log | 1.1613 | 144 | 0.8647 | 0.1131 | 0.8647 | 0.9299 |
| No log | 1.1774 | 146 | 0.7936 | 0.1821 | 0.7936 | 0.8908 |
| No log | 1.1935 | 148 | 0.7950 | 0.0471 | 0.7950 | 0.8916 |
| No log | 1.2097 | 150 | 0.7925 | 0.0309 | 0.7925 | 0.8902 |
| No log | 1.2258 | 152 | 0.7859 | -0.0084 | 0.7859 | 0.8865 |
| No log | 1.2419 | 154 | 0.7949 | 0.1404 | 0.7949 | 0.8916 |
| No log | 1.2581 | 156 | 0.8073 | 0.1566 | 0.8073 | 0.8985 |
| No log | 1.2742 | 158 | 0.7812 | 0.1647 | 0.7812 | 0.8838 |
| No log | 1.2903 | 160 | 0.7417 | 0.2401 | 0.7417 | 0.8612 |
| No log | 1.3065 | 162 | 0.7389 | 0.0994 | 0.7389 | 0.8596 |
| No log | 1.3226 | 164 | 0.7532 | 0.1060 | 0.7532 | 0.8679 |
| No log | 1.3387 | 166 | 0.7423 | 0.0769 | 0.7423 | 0.8616 |
| No log | 1.3548 | 168 | 0.7573 | 0.0455 | 0.7573 | 0.8702 |
| No log | 1.3710 | 170 | 0.7769 | 0.0455 | 0.7769 | 0.8814 |
| No log | 1.3871 | 172 | 0.7993 | 0.2295 | 0.7993 | 0.8941 |
| No log | 1.4032 | 174 | 0.8139 | 0.2295 | 0.8139 | 0.9022 |
| No log | 1.4194 | 176 | 0.8126 | 0.1814 | 0.8126 | 0.9015 |
| No log | 1.4355 | 178 | 0.8151 | 0.1784 | 0.8151 | 0.9028 |
| No log | 1.4516 | 180 | 0.8253 | 0.2318 | 0.8253 | 0.9084 |
| No log | 1.4677 | 182 | 0.8139 | 0.2529 | 0.8139 | 0.9022 |
| No log | 1.4839 | 184 | 0.8162 | 0.1307 | 0.8162 | 0.9034 |
| No log | 1.5 | 186 | 0.8063 | 0.1307 | 0.8063 | 0.8979 |
| No log | 1.5161 | 188 | 0.7954 | 0.1307 | 0.7954 | 0.8918 |
| No log | 1.5323 | 190 | 0.7802 | 0.1307 | 0.7802 | 0.8833 |
| No log | 1.5484 | 192 | 0.7850 | 0.2529 | 0.7850 | 0.8860 |
| No log | 1.5645 | 194 | 0.7859 | 0.2529 | 0.7859 | 0.8865 |
| No log | 1.5806 | 196 | 0.7686 | 0.1599 | 0.7686 | 0.8767 |
| No log | 1.5968 | 198 | 0.7503 | 0.1781 | 0.7503 | 0.8662 |
| No log | 1.6129 | 200 | 0.7162 | 0.1974 | 0.7162 | 0.8463 |
| No log | 1.6290 | 202 | 0.7011 | 0.1915 | 0.7011 | 0.8373 |
| No log | 1.6452 | 204 | 0.7154 | 0.2048 | 0.7154 | 0.8458 |
| No log | 1.6613 | 206 | 0.7271 | 0.1856 | 0.7271 | 0.8527 |
| No log | 1.6774 | 208 | 0.7505 | 0.1921 | 0.7505 | 0.8663 |
| No log | 1.6935 | 210 | 0.7427 | 0.1695 | 0.7427 | 0.8618 |
| No log | 1.7097 | 212 | 0.7591 | 0.1695 | 0.7591 | 0.8713 |
| No log | 1.7258 | 214 | 0.7846 | 0.2162 | 0.7846 | 0.8858 |
| No log | 1.7419 | 216 | 0.8009 | 0.2775 | 0.8009 | 0.8949 |
| No log | 1.7581 | 218 | 0.7956 | 0.2775 | 0.7956 | 0.8920 |
| No log | 1.7742 | 220 | 0.7956 | 0.1935 | 0.7956 | 0.8919 |
| No log | 1.7903 | 222 | 0.7907 | 0.1628 | 0.7907 | 0.8892 |
| No log | 1.8065 | 224 | 0.7556 | 0.1446 | 0.7556 | 0.8692 |
| No log | 1.8226 | 226 | 0.7083 | 0.1053 | 0.7083 | 0.8416 |
| No log | 1.8387 | 228 | 0.7242 | 0.2237 | 0.7242 | 0.8510 |
| No log | 1.8548 | 230 | 0.6961 | 0.1989 | 0.6961 | 0.8343 |
| No log | 1.8710 | 232 | 0.6584 | 0.1849 | 0.6584 | 0.8114 |
| No log | 1.8871 | 234 | 0.6828 | 0.1909 | 0.6828 | 0.8263 |
| No log | 1.9032 | 236 | 0.6781 | 0.1683 | 0.6781 | 0.8235 |
| No log | 1.9194 | 238 | 0.6889 | 0.1435 | 0.6889 | 0.8300 |
| No log | 1.9355 | 240 | 0.7100 | 0.1435 | 0.7100 | 0.8426 |
| No log | 1.9516 | 242 | 0.7255 | 0.2089 | 0.7255 | 0.8517 |
| No log | 1.9677 | 244 | 0.7368 | 0.1935 | 0.7368 | 0.8583 |
| No log | 1.9839 | 246 | 0.7655 | 0.2246 | 0.7655 | 0.8749 |
| No log | 2.0 | 248 | 0.8070 | 0.2251 | 0.8070 | 0.8983 |
| No log | 2.0161 | 250 | 0.8007 | 0.2622 | 0.8007 | 0.8948 |
| No log | 2.0323 | 252 | 0.8148 | 0.3308 | 0.8148 | 0.9026 |
| No log | 2.0484 | 254 | 0.8545 | 0.2672 | 0.8545 | 0.9244 |
| No log | 2.0645 | 256 | 0.8146 | 0.2684 | 0.8146 | 0.9026 |
| No log | 2.0806 | 258 | 0.7733 | 0.2373 | 0.7733 | 0.8794 |
| No log | 2.0968 | 260 | 0.7899 | 0.2722 | 0.7899 | 0.8888 |
| No log | 2.1129 | 262 | 0.8166 | 0.2516 | 0.8166 | 0.9037 |
| No log | 2.1290 | 264 | 0.8691 | 0.2651 | 0.8691 | 0.9322 |
| No log | 2.1452 | 266 | 0.8967 | 0.2651 | 0.8967 | 0.9469 |
| No log | 2.1613 | 268 | 0.8889 | 0.2414 | 0.8889 | 0.9428 |
| No log | 2.1774 | 270 | 0.9334 | 0.2199 | 0.9334 | 0.9661 |
| No log | 2.1935 | 272 | 0.9570 | 0.2515 | 0.9570 | 0.9783 |
| No log | 2.2097 | 274 | 0.8848 | 0.3061 | 0.8848 | 0.9406 |
| No log | 2.2258 | 276 | 0.8355 | 0.3134 | 0.8355 | 0.9140 |
| No log | 2.2419 | 278 | 0.8004 | 0.3134 | 0.8004 | 0.8947 |
| No log | 2.2581 | 280 | 0.7925 | 0.4149 | 0.7925 | 0.8902 |
| No log | 2.2742 | 282 | 0.7875 | 0.4149 | 0.7875 | 0.8874 |
| No log | 2.2903 | 284 | 0.8513 | 0.3910 | 0.8513 | 0.9227 |
| No log | 2.3065 | 286 | 0.9223 | 0.3167 | 0.9223 | 0.9604 |
| No log | 2.3226 | 288 | 0.9897 | 0.1951 | 0.9897 | 0.9948 |
| No log | 2.3387 | 290 | 0.9580 | 0.2136 | 0.9580 | 0.9788 |
| No log | 2.3548 | 292 | 0.9009 | 0.3348 | 0.9009 | 0.9492 |
| No log | 2.3710 | 294 | 0.8313 | 0.3045 | 0.8313 | 0.9117 |
| No log | 2.3871 | 296 | 0.8171 | 0.3134 | 0.8171 | 0.9039 |
| No log | 2.4032 | 298 | 0.8221 | 0.3134 | 0.8221 | 0.9067 |
| No log | 2.4194 | 300 | 0.8767 | 0.3125 | 0.8767 | 0.9363 |
| No log | 2.4355 | 302 | 0.8830 | 0.2457 | 0.8830 | 0.9397 |
| No log | 2.4516 | 304 | 0.7961 | 0.3152 | 0.7961 | 0.8923 |
| No log | 2.4677 | 306 | 0.6965 | 0.3304 | 0.6965 | 0.8345 |
| No log | 2.4839 | 308 | 0.6554 | 0.2788 | 0.6554 | 0.8096 |
| No log | 2.5 | 310 | 0.6546 | 0.3440 | 0.6546 | 0.8091 |
| No log | 2.5161 | 312 | 0.6670 | 0.3464 | 0.6670 | 0.8167 |
| No log | 2.5323 | 314 | 0.6837 | 0.3464 | 0.6837 | 0.8269 |
| No log | 2.5484 | 316 | 0.6728 | 0.2788 | 0.6728 | 0.8202 |
| No log | 2.5645 | 318 | 0.6759 | 0.2788 | 0.6759 | 0.8221 |
| No log | 2.5806 | 320 | 0.7269 | 0.3669 | 0.7269 | 0.8526 |
| No log | 2.5968 | 322 | 0.7672 | 0.3581 | 0.7672 | 0.8759 |
| No log | 2.6129 | 324 | 0.7732 | 0.3045 | 0.7732 | 0.8793 |
| No log | 2.6290 | 326 | 0.7444 | 0.3073 | 0.7444 | 0.8628 |
| No log | 2.6452 | 328 | 0.7563 | 0.3027 | 0.7563 | 0.8697 |
| No log | 2.6613 | 330 | 0.7665 | 0.2904 | 0.7665 | 0.8755 |
| No log | 2.6774 | 332 | 0.8437 | 0.2662 | 0.8437 | 0.9185 |
| No log | 2.6935 | 334 | 0.9282 | 0.2118 | 0.9282 | 0.9634 |
| No log | 2.7097 | 336 | 0.9238 | 0.1825 | 0.9238 | 0.9611 |
| No log | 2.7258 | 338 | 0.8174 | 0.2905 | 0.8174 | 0.9041 |
| No log | 2.7419 | 340 | 0.7198 | 0.3324 | 0.7198 | 0.8484 |
| No log | 2.7581 | 342 | 0.6945 | 0.2373 | 0.6945 | 0.8333 |
| No log | 2.7742 | 344 | 0.6685 | 0.3593 | 0.6685 | 0.8176 |
| No log | 2.7903 | 346 | 0.6795 | 0.2992 | 0.6795 | 0.8243 |
| No log | 2.8065 | 348 | 0.7412 | 0.2841 | 0.7412 | 0.8609 |
| No log | 2.8226 | 350 | 0.8084 | 0.2395 | 0.8084 | 0.8991 |
| No log | 2.8387 | 352 | 0.7671 | 0.2717 | 0.7671 | 0.8758 |
| No log | 2.8548 | 354 | 0.7041 | 0.3227 | 0.7041 | 0.8391 |
| No log | 2.8710 | 356 | 0.6703 | 0.3584 | 0.6703 | 0.8187 |
| No log | 2.8871 | 358 | 0.6380 | 0.4016 | 0.6380 | 0.7988 |
| No log | 2.9032 | 360 | 0.6661 | 0.3563 | 0.6661 | 0.8162 |
| No log | 2.9194 | 362 | 0.7767 | 0.2854 | 0.7767 | 0.8813 |
| No log | 2.9355 | 364 | 0.8293 | 0.2579 | 0.8293 | 0.9107 |
| No log | 2.9516 | 366 | 0.7548 | 0.3227 | 0.7548 | 0.8688 |
| No log | 2.9677 | 368 | 0.6863 | 0.4254 | 0.6863 | 0.8284 |
| No log | 2.9839 | 370 | 0.6541 | 0.4351 | 0.6541 | 0.8087 |
| No log | 3.0 | 372 | 0.6757 | 0.3914 | 0.6757 | 0.8220 |
| No log | 3.0161 | 374 | 0.7366 | 0.4397 | 0.7366 | 0.8583 |
| No log | 3.0323 | 376 | 0.8394 | 0.2437 | 0.8394 | 0.9162 |
| No log | 3.0484 | 378 | 0.9355 | 0.2536 | 0.9355 | 0.9672 |
| No log | 3.0645 | 380 | 0.9313 | 0.2049 | 0.9313 | 0.9650 |
| No log | 3.0806 | 382 | 0.8588 | 0.2437 | 0.8588 | 0.9267 |
| No log | 3.0968 | 384 | 0.8123 | 0.3389 | 0.8123 | 0.9013 |
| No log | 3.1129 | 386 | 0.7675 | 0.3368 | 0.7675 | 0.8761 |
| No log | 3.1290 | 388 | 0.7533 | 0.2547 | 0.7533 | 0.8679 |
| No log | 3.1452 | 390 | 0.7729 | 0.3306 | 0.7729 | 0.8791 |
| No log | 3.1613 | 392 | 0.7888 | 0.3162 | 0.7888 | 0.8881 |
| No log | 3.1774 | 394 | 0.8341 | 0.2671 | 0.8341 | 0.9133 |
| No log | 3.1935 | 396 | 0.8160 | 0.2510 | 0.8160 | 0.9033 |
| No log | 3.2097 | 398 | 0.7499 | 0.2622 | 0.7499 | 0.8659 |
| No log | 3.2258 | 400 | 0.7124 | 0.2223 | 0.7124 | 0.8440 |
| No log | 3.2419 | 402 | 0.6680 | 0.2823 | 0.6680 | 0.8173 |
| No log | 3.2581 | 404 | 0.6546 | 0.2244 | 0.6546 | 0.8091 |
| No log | 3.2742 | 406 | 0.6829 | 0.2992 | 0.6829 | 0.8264 |
| No log | 3.2903 | 408 | 0.7224 | 0.2144 | 0.7224 | 0.8499 |
| No log | 3.3065 | 410 | 0.7258 | 0.2144 | 0.7258 | 0.8520 |
| No log | 3.3226 | 412 | 0.7723 | 0.2278 | 0.7723 | 0.8788 |
| No log | 3.3387 | 414 | 0.7990 | 0.2402 | 0.7990 | 0.8939 |
| No log | 3.3548 | 416 | 0.7625 | 0.2622 | 0.7625 | 0.8732 |
| No log | 3.3710 | 418 | 0.7458 | 0.2223 | 0.7458 | 0.8636 |
| No log | 3.3871 | 420 | 0.7660 | 0.1899 | 0.7660 | 0.8752 |
| No log | 3.4032 | 422 | 0.7822 | 0.2373 | 0.7822 | 0.8844 |
| No log | 3.4194 | 424 | 0.7898 | 0.2443 | 0.7898 | 0.8887 |
| No log | 3.4355 | 426 | 0.8161 | 0.2373 | 0.8161 | 0.9034 |
| No log | 3.4516 | 428 | 0.8634 | 0.1781 | 0.8634 | 0.9292 |
| No log | 3.4677 | 430 | 0.9086 | 0.1956 | 0.9086 | 0.9532 |
| No log | 3.4839 | 432 | 0.9003 | 0.1956 | 0.9003 | 0.9488 |
| No log | 3.5 | 434 | 0.8680 | 0.2626 | 0.8680 | 0.9316 |
| No log | 3.5161 | 436 | 0.8201 | 0.2540 | 0.8201 | 0.9056 |
| No log | 3.5323 | 438 | 0.7844 | 0.3218 | 0.7844 | 0.8856 |
| No log | 3.5484 | 440 | 0.7663 | 0.3241 | 0.7663 | 0.8754 |
| No log | 3.5645 | 442 | 0.7498 | 0.3064 | 0.7498 | 0.8659 |
| No log | 3.5806 | 444 | 0.7068 | 0.2617 | 0.7068 | 0.8407 |
| No log | 3.5968 | 446 | 0.6870 | 0.3280 | 0.6870 | 0.8289 |
| No log | 3.6129 | 448 | 0.7274 | 0.2328 | 0.7274 | 0.8528 |
| No log | 3.6290 | 450 | 0.8052 | 0.1627 | 0.8052 | 0.8973 |
| No log | 3.6452 | 452 | 0.8257 | 0.1627 | 0.8257 | 0.9087 |
| No log | 3.6613 | 454 | 0.7630 | 0.2548 | 0.7630 | 0.8735 |
| No log | 3.6774 | 456 | 0.7141 | 0.2377 | 0.7141 | 0.8451 |
| No log | 3.6935 | 458 | 0.6618 | 0.3091 | 0.6618 | 0.8135 |
| No log | 3.7097 | 460 | 0.6540 | 0.2992 | 0.6540 | 0.8087 |
| No log | 3.7258 | 462 | 0.6707 | 0.3162 | 0.6707 | 0.8189 |
| No log | 3.7419 | 464 | 0.7315 | 0.2533 | 0.7315 | 0.8553 |
| No log | 3.7581 | 466 | 0.8363 | 0.2527 | 0.8363 | 0.9145 |
| No log | 3.7742 | 468 | 0.9592 | 0.1744 | 0.9592 | 0.9794 |
| No log | 3.7903 | 470 | 0.9903 | 0.1583 | 0.9903 | 0.9952 |
| No log | 3.8065 | 472 | 0.9374 | 0.1768 | 0.9374 | 0.9682 |
| No log | 3.8226 | 474 | 0.8205 | 0.3040 | 0.8205 | 0.9058 |
| No log | 3.8387 | 476 | 0.7433 | 0.3631 | 0.7433 | 0.8621 |
| No log | 3.8548 | 478 | 0.7126 | 0.3612 | 0.7126 | 0.8441 |
| No log | 3.8710 | 480 | 0.7302 | 0.3763 | 0.7302 | 0.8545 |
| No log | 3.8871 | 482 | 0.7997 | 0.3275 | 0.7997 | 0.8943 |
| No log | 3.9032 | 484 | 0.8718 | 0.2964 | 0.8718 | 0.9337 |
| No log | 3.9194 | 486 | 0.9295 | 0.2875 | 0.9295 | 0.9641 |
| No log | 3.9355 | 488 | 0.8932 | 0.3232 | 0.8932 | 0.9451 |
| No log | 3.9516 | 490 | 0.8288 | 0.3178 | 0.8288 | 0.9104 |
| No log | 3.9677 | 492 | 0.7767 | 0.3491 | 0.7767 | 0.8813 |
| No log | 3.9839 | 494 | 0.7240 | 0.3631 | 0.7240 | 0.8509 |
| No log | 4.0 | 496 | 0.7207 | 0.3499 | 0.7207 | 0.8489 |
| No log | 4.0161 | 498 | 0.7253 | 0.3499 | 0.7253 | 0.8516 |
| 0.279 | 4.0323 | 500 | 0.7298 | 0.3367 | 0.7298 | 0.8543 |
| 0.279 | 4.0484 | 502 | 0.7273 | 0.3367 | 0.7273 | 0.8528 |
| 0.279 | 4.0645 | 504 | 0.7292 | 0.3216 | 0.7292 | 0.8539 |
| 0.279 | 4.0806 | 506 | 0.7243 | 0.3348 | 0.7243 | 0.8511 |
| 0.279 | 4.0968 | 508 | 0.7407 | 0.3275 | 0.7407 | 0.8606 |
| 0.279 | 4.1129 | 510 | 0.7735 | 0.3225 | 0.7735 | 0.8795 |
| 0.279 | 4.1290 | 512 | 0.7942 | 0.3225 | 0.7942 | 0.8912 |
| 0.279 | 4.1452 | 514 | 0.8825 | 0.2564 | 0.8825 | 0.9394 |
| 0.279 | 4.1613 | 516 | 0.9122 | 0.2404 | 0.9122 | 0.9551 |
| 0.279 | 4.1774 | 518 | 0.9284 | 0.2290 | 0.9284 | 0.9635 |
| 0.279 | 4.1935 | 520 | 0.8493 | 0.3418 | 0.8493 | 0.9216 |
| 0.279 | 4.2097 | 522 | 0.7796 | 0.3083 | 0.7796 | 0.8829 |
| 0.279 | 4.2258 | 524 | 0.7437 | 0.3063 | 0.7437 | 0.8624 |
| 0.279 | 4.2419 | 526 | 0.7803 | 0.3083 | 0.7803 | 0.8834 |
| 0.279 | 4.2581 | 528 | 0.8554 | 0.3215 | 0.8554 | 0.9249 |
| 0.279 | 4.2742 | 530 | 0.9346 | 0.2103 | 0.9346 | 0.9668 |
| 0.279 | 4.2903 | 532 | 0.9379 | 0.2103 | 0.9379 | 0.9684 |
| 0.279 | 4.3065 | 534 | 0.8637 | 0.3267 | 0.8637 | 0.9294 |
| 0.279 | 4.3226 | 536 | 0.7953 | 0.2883 | 0.7953 | 0.8918 |
| 0.279 | 4.3387 | 538 | 0.8111 | 0.2945 | 0.8111 | 0.9006 |
| 0.279 | 4.3548 | 540 | 0.8481 | 0.3136 | 0.8481 | 0.9209 |
| 0.279 | 4.3710 | 542 | 0.8667 | 0.3136 | 0.8667 | 0.9310 |
| 0.279 | 4.3871 | 544 | 0.8398 | 0.3080 | 0.8398 | 0.9164 |
| 0.279 | 4.4032 | 546 | 0.8306 | 0.3021 | 0.8306 | 0.9114 |
| 0.279 | 4.4194 | 548 | 0.8284 | 0.3083 | 0.8284 | 0.9101 |
| 0.279 | 4.4355 | 550 | 0.8191 | 0.3085 | 0.8191 | 0.9051 |
| 0.279 | 4.4516 | 552 | 0.8167 | 0.2651 | 0.8167 | 0.9037 |
| 0.279 | 4.4677 | 554 | 0.8259 | 0.2651 | 0.8259 | 0.9088 |
| 0.279 | 4.4839 | 556 | 0.8299 | 0.2782 | 0.8299 | 0.9110 |
| 0.279 | 4.5 | 558 | 0.8055 | 0.2782 | 0.8055 | 0.8975 |
| 0.279 | 4.5161 | 560 | 0.8248 | 0.2853 | 0.8248 | 0.9082 |
| 0.279 | 4.5323 | 562 | 0.8877 | 0.3006 | 0.8877 | 0.9422 |
| 0.279 | 4.5484 | 564 | 0.9180 | 0.2727 | 0.9180 | 0.9581 |
| 0.279 | 4.5645 | 566 | 0.9529 | 0.2360 | 0.9529 | 0.9762 |
| 0.279 | 4.5806 | 568 | 0.9230 | 0.2518 | 0.9230 | 0.9608 |
| 0.279 | 4.5968 | 570 | 0.8427 | 0.3349 | 0.8427 | 0.9180 |
| 0.279 | 4.6129 | 572 | 0.7567 | 0.3531 | 0.7567 | 0.8699 |
| 0.279 | 4.6290 | 574 | 0.7071 | 0.3480 | 0.7071 | 0.8409 |
| 0.279 | 4.6452 | 576 | 0.6729 | 0.3836 | 0.6729 | 0.8203 |
| 0.279 | 4.6613 | 578 | 0.6803 | 0.3612 | 0.6803 | 0.8248 |
| 0.279 | 4.6774 | 580 | 0.7258 | 0.3403 | 0.7258 | 0.8519 |
| 0.279 | 4.6935 | 582 | 0.8421 | 0.2591 | 0.8421 | 0.9177 |
| 0.279 | 4.7097 | 584 | 0.9633 | 0.2519 | 0.9633 | 0.9815 |
| 0.279 | 4.7258 | 586 | 1.0774 | 0.1984 | 1.0774 | 1.0380 |
| 0.279 | 4.7419 | 588 | 1.2090 | 0.2374 | 1.2090 | 1.0996 |
| 0.279 | 4.7581 | 590 | 1.2363 | 0.2374 | 1.2363 | 1.1119 |
| 0.279 | 4.7742 | 592 | 1.2077 | 0.2517 | 1.2077 | 1.0990 |
| 0.279 | 4.7903 | 594 | 1.1325 | 0.2530 | 1.1325 | 1.0642 |
| 0.279 | 4.8065 | 596 | 1.0088 | 0.2655 | 1.0088 | 1.0044 |
| 0.279 | 4.8226 | 598 | 0.9086 | 0.3059 | 0.9086 | 0.9532 |
| 0.279 | 4.8387 | 600 | 0.8116 | 0.2940 | 0.8116 | 0.9009 |
| 0.279 | 4.8548 | 602 | 0.7949 | 0.3330 | 0.7949 | 0.8915 |
| 0.279 | 4.8710 | 604 | 0.8471 | 0.2571 | 0.8471 | 0.9204 |
| 0.279 | 4.8871 | 606 | 0.8922 | 0.2714 | 0.8922 | 0.9446 |
| 0.279 | 4.9032 | 608 | 0.9051 | 0.3196 | 0.9051 | 0.9513 |
| 0.279 | 4.9194 | 610 | 0.9188 | 0.3250 | 0.9188 | 0.9586 |
| 0.279 | 4.9355 | 612 | 0.8962 | 0.3139 | 0.8962 | 0.9467 |
| 0.279 | 4.9516 | 614 | 0.8552 | 0.2998 | 0.8552 | 0.9248 |
| 0.279 | 4.9677 | 616 | 0.7871 | 0.2902 | 0.7871 | 0.8872 |
| 0.279 | 4.9839 | 618 | 0.7281 | 0.3499 | 0.7281 | 0.8533 |
| 0.279 | 5.0 | 620 | 0.7217 | 0.3290 | 0.7217 | 0.8495 |
| 0.279 | 5.0161 | 622 | 0.7666 | 0.3040 | 0.7666 | 0.8756 |
| 0.279 | 5.0323 | 624 | 0.7847 | 0.3040 | 0.7847 | 0.8858 |
| 0.279 | 5.0484 | 626 | 0.7548 | 0.3105 | 0.7548 | 0.8688 |
| 0.279 | 5.0645 | 628 | 0.7433 | 0.3105 | 0.7433 | 0.8622 |
| 0.279 | 5.0806 | 630 | 0.7051 | 0.3162 | 0.7051 | 0.8397 |
| 0.279 | 5.0968 | 632 | 0.6823 | 0.3452 | 0.6823 | 0.8260 |
| 0.279 | 5.1129 | 634 | 0.6663 | 0.3118 | 0.6663 | 0.8163 |
| 0.279 | 5.1290 | 636 | 0.6564 | 0.3045 | 0.6564 | 0.8102 |
| 0.279 | 5.1452 | 638 | 0.6764 | 0.3228 | 0.6764 | 0.8224 |
| 0.279 | 5.1613 | 640 | 0.6941 | 0.3389 | 0.6941 | 0.8331 |
| 0.279 | 5.1774 | 642 | 0.6847 | 0.3184 | 0.6847 | 0.8275 |
| 0.279 | 5.1935 | 644 | 0.6817 | 0.3328 | 0.6817 | 0.8256 |
| 0.279 | 5.2097 | 646 | 0.6813 | 0.3328 | 0.6813 | 0.8254 |
| 0.279 | 5.2258 | 648 | 0.7051 | 0.3389 | 0.7051 | 0.8397 |
| 0.279 | 5.2419 | 650 | 0.7380 | 0.2884 | 0.7380 | 0.8591 |
| 0.279 | 5.2581 | 652 | 0.7907 | 0.3152 | 0.7907 | 0.8892 |
| 0.279 | 5.2742 | 654 | 0.8118 | 0.3216 | 0.8118 | 0.9010 |
| 0.279 | 5.2903 | 656 | 0.7796 | 0.2975 | 0.7796 | 0.8829 |
| 0.279 | 5.3065 | 658 | 0.7407 | 0.2949 | 0.7407 | 0.8606 |
| 0.279 | 5.3226 | 660 | 0.7166 | 0.2325 | 0.7166 | 0.8465 |
| 0.279 | 5.3387 | 662 | 0.7040 | 0.2746 | 0.7040 | 0.8390 |
| 0.279 | 5.3548 | 664 | 0.7161 | 0.2373 | 0.7161 | 0.8462 |
| 0.279 | 5.3710 | 666 | 0.7493 | 0.2949 | 0.7493 | 0.8656 |
| 0.279 | 5.3871 | 668 | 0.7991 | 0.3152 | 0.7991 | 0.8939 |
| 0.279 | 5.4032 | 670 | 0.8601 | 0.2793 | 0.8601 | 0.9274 |
| 0.279 | 5.4194 | 672 | 0.8672 | 0.2586 | 0.8672 | 0.9312 |
| 0.279 | 5.4355 | 674 | 0.8212 | 0.2975 | 0.8212 | 0.9062 |
| 0.279 | 5.4516 | 676 | 0.7657 | 0.2934 | 0.7657 | 0.8750 |
| 0.279 | 5.4677 | 678 | 0.7363 | 0.2841 | 0.7363 | 0.8581 |
| 0.279 | 5.4839 | 680 | 0.6949 | 0.3113 | 0.6949 | 0.8336 |
| 0.279 | 5.5 | 682 | 0.6795 | 0.3249 | 0.6795 | 0.8243 |
| 0.279 | 5.5161 | 684 | 0.6888 | 0.3249 | 0.6888 | 0.8300 |
| 0.279 | 5.5323 | 686 | 0.7354 | 0.2971 | 0.7354 | 0.8576 |
| 0.279 | 5.5484 | 688 | 0.8069 | 0.3131 | 0.8069 | 0.8983 |
| 0.279 | 5.5645 | 690 | 0.8540 | 0.2372 | 0.8540 | 0.9241 |
| 0.279 | 5.5806 | 692 | 0.8605 | 0.2527 | 0.8605 | 0.9277 |
| 0.279 | 5.5968 | 694 | 0.8320 | 0.2927 | 0.8320 | 0.9121 |
| 0.279 | 5.6129 | 696 | 0.7899 | 0.2971 | 0.7899 | 0.8888 |
| 0.279 | 5.6290 | 698 | 0.7332 | 0.3290 | 0.7332 | 0.8563 |
| 0.279 | 5.6452 | 700 | 0.6889 | 0.3113 | 0.6889 | 0.8300 |
| 0.279 | 5.6613 | 702 | 0.6715 | 0.3249 | 0.6715 | 0.8194 |
| 0.279 | 5.6774 | 704 | 0.6798 | 0.3249 | 0.6798 | 0.8245 |
| 0.279 | 5.6935 | 706 | 0.7140 | 0.3113 | 0.7140 | 0.8450 |
| 0.279 | 5.7097 | 708 | 0.7977 | 0.2873 | 0.7977 | 0.8932 |
| 0.279 | 5.7258 | 710 | 0.9136 | 0.2895 | 0.9136 | 0.9558 |
| 0.279 | 5.7419 | 712 | 1.0137 | 0.2655 | 1.0137 | 1.0068 |
| 0.279 | 5.7581 | 714 | 1.0467 | 0.2274 | 1.0467 | 1.0231 |
| 0.279 | 5.7742 | 716 | 1.0555 | 0.2274 | 1.0555 | 1.0274 |
| 0.279 | 5.7903 | 718 | 1.0130 | 0.3075 | 1.0130 | 1.0065 |
| 0.279 | 5.8065 | 720 | 0.9343 | 0.2958 | 0.9343 | 0.9666 |
| 0.279 | 5.8226 | 722 | 0.8543 | 0.2953 | 0.8543 | 0.9243 |
| 0.279 | 5.8387 | 724 | 0.7873 | 0.3270 | 0.7873 | 0.8873 |
| 0.279 | 5.8548 | 726 | 0.7548 | 0.3019 | 0.7548 | 0.8688 |
| 0.279 | 5.8710 | 728 | 0.7406 | 0.2549 | 0.7406 | 0.8606 |
| 0.279 | 5.8871 | 730 | 0.7534 | 0.2414 | 0.7534 | 0.8680 |
| 0.279 | 5.9032 | 732 | 0.7756 | 0.3270 | 0.7756 | 0.8807 |
| 0.279 | 5.9194 | 734 | 0.7920 | 0.3131 | 0.7920 | 0.8899 |
| 0.279 | 5.9355 | 736 | 0.8328 | 0.3014 | 0.8328 | 0.9126 |
| 0.279 | 5.9516 | 738 | 0.8703 | 0.3036 | 0.8703 | 0.9329 |
| 0.279 | 5.9677 | 740 | 0.8741 | 0.3036 | 0.8741 | 0.9349 |
| 0.279 | 5.9839 | 742 | 0.8732 | 0.2969 | 0.8732 | 0.9344 |
| 0.279 | 6.0 | 744 | 0.8982 | 0.3036 | 0.8982 | 0.9477 |
| 0.279 | 6.0161 | 746 | 0.9149 | 0.2828 | 0.9149 | 0.9565 |
| 0.279 | 6.0323 | 748 | 0.9535 | 0.2210 | 0.9535 | 0.9765 |
| 0.279 | 6.0484 | 750 | 0.9784 | 0.2210 | 0.9784 | 0.9891 |
| 0.279 | 6.0645 | 752 | 0.9663 | 0.2958 | 0.9663 | 0.9830 |
| 0.279 | 6.0806 | 754 | 0.9398 | 0.2958 | 0.9398 | 0.9694 |
| 0.279 | 6.0968 | 756 | 0.8884 | 0.2536 | 0.8884 | 0.9425 |
| 0.279 | 6.1129 | 758 | 0.8120 | 0.3195 | 0.8120 | 0.9011 |
| 0.279 | 6.1290 | 760 | 0.7808 | 0.2863 | 0.7808 | 0.8836 |
| 0.279 | 6.1452 | 762 | 0.7744 | 0.2863 | 0.7744 | 0.8800 |
| 0.279 | 6.1613 | 764 | 0.7937 | 0.2863 | 0.7937 | 0.8909 |
| 0.279 | 6.1774 | 766 | 0.8271 | 0.3040 | 0.8271 | 0.9095 |
| 0.279 | 6.1935 | 768 | 0.8494 | 0.3040 | 0.8494 | 0.9216 |
| 0.279 | 6.2097 | 770 | 0.8566 | 0.2641 | 0.8566 | 0.9255 |
| 0.279 | 6.2258 | 772 | 0.8820 | 0.2515 | 0.8820 | 0.9391 |
| 0.279 | 6.2419 | 774 | 0.9038 | 0.2390 | 0.9038 | 0.9507 |
| 0.279 | 6.2581 | 776 | 0.9614 | 0.2385 | 0.9614 | 0.9805 |
| 0.279 | 6.2742 | 778 | 0.9924 | 0.2181 | 0.9924 | 0.9962 |
| 0.279 | 6.2903 | 780 | 1.0267 | 0.2381 | 1.0267 | 1.0133 |
| 0.279 | 6.3065 | 782 | 1.0213 | 0.2720 | 1.0213 | 1.0106 |
| 0.279 | 6.3226 | 784 | 0.9795 | 0.2368 | 0.9795 | 0.9897 |
| 0.279 | 6.3387 | 786 | 0.9211 | 0.2611 | 0.9211 | 0.9597 |
| 0.279 | 6.3548 | 788 | 0.8850 | 0.2536 | 0.8850 | 0.9408 |
| 0.279 | 6.3710 | 790 | 0.8773 | 0.2536 | 0.8773 | 0.9366 |
| 0.279 | 6.3871 | 792 | 0.8726 | 0.2390 | 0.8726 | 0.9341 |
| 0.279 | 6.4032 | 794 | 0.9082 | 0.2536 | 0.9082 | 0.9530 |
| 0.279 | 6.4194 | 796 | 0.9792 | 0.2821 | 0.9792 | 0.9895 |
| 0.279 | 6.4355 | 798 | 1.0329 | 0.2783 | 1.0329 | 1.0163 |
| 0.279 | 6.4516 | 800 | 1.0606 | 0.2505 | 1.0606 | 1.0298 |
| 0.279 | 6.4677 | 802 | 1.0724 | 0.2505 | 1.0724 | 1.0355 |
| 0.279 | 6.4839 | 804 | 1.0574 | 0.2505 | 1.0574 | 1.0283 |
| 0.279 | 6.5 | 806 | 1.0281 | 0.3129 | 1.0281 | 1.0139 |
| 0.279 | 6.5161 | 808 | 0.9703 | 0.2750 | 0.9703 | 0.9850 |
| 0.279 | 6.5323 | 810 | 0.9032 | 0.2536 | 0.9032 | 0.9503 |
| 0.279 | 6.5484 | 812 | 0.8509 | 0.2641 | 0.8509 | 0.9224 |
| 0.279 | 6.5645 | 814 | 0.8298 | 0.2564 | 0.8298 | 0.9110 |
| 0.279 | 6.5806 | 816 | 0.8095 | 0.2327 | 0.8095 | 0.8997 |
| 0.279 | 6.5968 | 818 | 0.8041 | 0.2327 | 0.8041 | 0.8967 |
| 0.279 | 6.6129 | 820 | 0.8285 | 0.2349 | 0.8285 | 0.9102 |
| 0.279 | 6.6290 | 822 | 0.8582 | 0.2349 | 0.8582 | 0.9264 |
| 0.279 | 6.6452 | 824 | 0.9077 | 0.2682 | 0.9077 | 0.9527 |
| 0.279 | 6.6613 | 826 | 0.9278 | 0.2287 | 0.9278 | 0.9632 |
| 0.279 | 6.6774 | 828 | 0.9375 | 0.2287 | 0.9375 | 0.9682 |
| 0.279 | 6.6935 | 830 | 0.9564 | 0.2368 | 0.9564 | 0.9780 |
| 0.279 | 6.7097 | 832 | 0.9684 | 0.1985 | 0.9684 | 0.9841 |
| 0.279 | 6.7258 | 834 | 0.9886 | 0.2072 | 0.9886 | 0.9943 |
| 0.279 | 6.7419 | 836 | 0.9882 | 0.2072 | 0.9882 | 0.9941 |
| 0.279 | 6.7581 | 838 | 1.0064 | 0.1792 | 1.0064 | 1.0032 |
| 0.279 | 6.7742 | 840 | 1.0421 | 0.2092 | 1.0421 | 1.0208 |
| 0.279 | 6.7903 | 842 | 1.0879 | 0.2168 | 1.0879 | 1.0430 |
| 0.279 | 6.8065 | 844 | 1.0761 | 0.2505 | 1.0761 | 1.0374 |
| 0.279 | 6.8226 | 846 | 1.0141 | 0.3002 | 1.0141 | 1.0070 |
| 0.279 | 6.8387 | 848 | 0.9836 | 0.3002 | 0.9836 | 0.9918 |
| 0.279 | 6.8548 | 850 | 0.9848 | 0.3002 | 0.9848 | 0.9924 |
| 0.279 | 6.8710 | 852 | 0.9900 | 0.3002 | 0.9900 | 0.9950 |
| 0.279 | 6.8871 | 854 | 0.9547 | 0.2945 | 0.9547 | 0.9771 |
| 0.279 | 6.9032 | 856 | 0.9053 | 0.3232 | 0.9053 | 0.9515 |
| 0.279 | 6.9194 | 858 | 0.8654 | 0.3232 | 0.8654 | 0.9303 |
| 0.279 | 6.9355 | 860 | 0.8571 | 0.3232 | 0.8571 | 0.9258 |
| 0.279 | 6.9516 | 862 | 0.8741 | 0.3232 | 0.8741 | 0.9349 |
| 0.279 | 6.9677 | 864 | 0.9010 | 0.3366 | 0.9010 | 0.9492 |
| 0.279 | 6.9839 | 866 | 0.9567 | 0.3058 | 0.9567 | 0.9781 |
| 0.279 | 7.0 | 868 | 0.9851 | 0.2945 | 0.9851 | 0.9925 |
| 0.279 | 7.0161 | 870 | 0.9741 | 0.2945 | 0.9741 | 0.9869 |
| 0.279 | 7.0323 | 872 | 0.9425 | 0.2884 | 0.9425 | 0.9708 |
| 0.279 | 7.0484 | 874 | 0.9111 | 0.2884 | 0.9111 | 0.9545 |
| 0.279 | 7.0645 | 876 | 0.9058 | 0.2884 | 0.9058 | 0.9517 |
| 0.279 | 7.0806 | 878 | 0.8762 | 0.3116 | 0.8762 | 0.9360 |
| 0.279 | 7.0968 | 880 | 0.8525 | 0.3178 | 0.8525 | 0.9233 |
| 0.279 | 7.1129 | 882 | 0.8522 | 0.3232 | 0.8522 | 0.9231 |
| 0.279 | 7.1290 | 884 | 0.8664 | 0.3232 | 0.8664 | 0.9308 |
| 0.279 | 7.1452 | 886 | 0.8970 | 0.3116 | 0.8970 | 0.9471 |
| 0.279 | 7.1613 | 888 | 0.9253 | 0.2750 | 0.9253 | 0.9619 |
| 0.279 | 7.1774 | 890 | 0.9198 | 0.2927 | 0.9198 | 0.9591 |
| 0.279 | 7.1935 | 892 | 0.9207 | 0.2927 | 0.9207 | 0.9596 |
| 0.279 | 7.2097 | 894 | 0.9312 | 0.2814 | 0.9312 | 0.9650 |
| 0.279 | 7.2258 | 896 | 0.9647 | 0.3075 | 0.9647 | 0.9822 |
| 0.279 | 7.2419 | 898 | 0.9680 | 0.3075 | 0.9680 | 0.9839 |
| 0.279 | 7.2581 | 900 | 0.9286 | 0.2884 | 0.9286 | 0.9636 |
| 0.279 | 7.2742 | 902 | 0.8600 | 0.2683 | 0.8600 | 0.9274 |
| 0.279 | 7.2903 | 904 | 0.8142 | 0.2995 | 0.8142 | 0.9024 |
| 0.279 | 7.3065 | 906 | 0.7841 | 0.2995 | 0.7841 | 0.8855 |
| 0.279 | 7.3226 | 908 | 0.7904 | 0.2995 | 0.7904 | 0.8890 |
| 0.279 | 7.3387 | 910 | 0.8013 | 0.2995 | 0.8013 | 0.8951 |
| 0.279 | 7.3548 | 912 | 0.8162 | 0.3060 | 0.8162 | 0.9034 |
| 0.279 | 7.3710 | 914 | 0.8339 | 0.3178 | 0.8339 | 0.9132 |
| 0.279 | 7.3871 | 916 | 0.8560 | 0.3232 | 0.8560 | 0.9252 |
| 0.279 | 7.4032 | 918 | 0.8797 | 0.2866 | 0.8797 | 0.9379 |
| 0.279 | 7.4194 | 920 | 0.8829 | 0.3232 | 0.8829 | 0.9397 |
| 0.279 | 7.4355 | 922 | 0.8811 | 0.3232 | 0.8811 | 0.9387 |
| 0.279 | 7.4516 | 924 | 0.8772 | 0.3232 | 0.8772 | 0.9366 |
| 0.279 | 7.4677 | 926 | 0.8757 | 0.3232 | 0.8757 | 0.9358 |
| 0.279 | 7.4839 | 928 | 0.8727 | 0.3232 | 0.8727 | 0.9342 |
| 0.279 | 7.5 | 930 | 0.8907 | 0.3232 | 0.8907 | 0.9438 |
| 0.279 | 7.5161 | 932 | 0.9193 | 0.2866 | 0.9193 | 0.9588 |
| 0.279 | 7.5323 | 934 | 0.9312 | 0.2884 | 0.9312 | 0.9650 |
| 0.279 | 7.5484 | 936 | 0.9268 | 0.2821 | 0.9268 | 0.9627 |
| 0.279 | 7.5645 | 938 | 0.9400 | 0.2821 | 0.9400 | 0.9695 |
| 0.279 | 7.5806 | 940 | 0.9564 | 0.2958 | 0.9564 | 0.9780 |
| 0.279 | 7.5968 | 942 | 0.9425 | 0.2958 | 0.9425 | 0.9708 |
| 0.279 | 7.6129 | 944 | 0.9410 | 0.2958 | 0.9410 | 0.9701 |
| 0.279 | 7.6290 | 946 | 0.9126 | 0.2821 | 0.9126 | 0.9553 |
| 0.279 | 7.6452 | 948 | 0.8674 | 0.2457 | 0.8674 | 0.9314 |
| 0.279 | 7.6613 | 950 | 0.8499 | 0.2457 | 0.8499 | 0.9219 |
| 0.279 | 7.6774 | 952 | 0.8343 | 0.2457 | 0.8343 | 0.9134 |
| 0.279 | 7.6935 | 954 | 0.8068 | 0.2995 | 0.8068 | 0.8982 |
| 0.279 | 7.7097 | 956 | 0.7961 | 0.2975 | 0.7961 | 0.8922 |
| 0.279 | 7.7258 | 958 | 0.7991 | 0.2975 | 0.7991 | 0.8939 |
| 0.279 | 7.7419 | 960 | 0.8291 | 0.2995 | 0.8291 | 0.9105 |
| 0.279 | 7.7581 | 962 | 0.8508 | 0.2933 | 0.8508 | 0.9224 |
| 0.279 | 7.7742 | 964 | 0.8860 | 0.2683 | 0.8860 | 0.9413 |
| 0.279 | 7.7903 | 966 | 0.9263 | 0.2814 | 0.9263 | 0.9624 |
| 0.279 | 7.8065 | 968 | 0.9681 | 0.2814 | 0.9681 | 0.9839 |
| 0.279 | 7.8226 | 970 | 0.9907 | 0.2945 | 0.9907 | 0.9953 |
| 0.279 | 7.8387 | 972 | 1.0116 | 0.3129 | 1.0116 | 1.0058 |
| 0.279 | 7.8548 | 974 | 1.0201 | 0.3129 | 1.0201 | 1.0100 |
| 0.279 | 7.8710 | 976 | 1.0047 | 0.3129 | 1.0047 | 1.0023 |
| 0.279 | 7.8871 | 978 | 0.9748 | 0.2814 | 0.9748 | 0.9873 |
| 0.279 | 7.9032 | 980 | 0.9413 | 0.2814 | 0.9413 | 0.9702 |
| 0.279 | 7.9194 | 982 | 0.9246 | 0.2814 | 0.9246 | 0.9616 |
| 0.279 | 7.9355 | 984 | 0.9251 | 0.2814 | 0.9251 | 0.9618 |
| 0.279 | 7.9516 | 986 | 0.9430 | 0.2945 | 0.9430 | 0.9711 |
| 0.279 | 7.9677 | 988 | 0.9890 | 0.3019 | 0.9890 | 0.9945 |
| 0.279 | 7.9839 | 990 | 1.0134 | 0.2673 | 1.0134 | 1.0067 |
| 0.279 | 8.0 | 992 | 1.0195 | 0.2673 | 1.0195 | 1.0097 |
| 0.279 | 8.0161 | 994 | 1.0002 | 0.2673 | 1.0002 | 1.0001 |
| 0.279 | 8.0323 | 996 | 0.9806 | 0.2962 | 0.9806 | 0.9902 |
| 0.279 | 8.0484 | 998 | 0.9524 | 0.2962 | 0.9524 | 0.9759 |
| 0.061 | 8.0645 | 1000 | 0.9344 | 0.2814 | 0.9344 | 0.9666 |
| 0.061 | 8.0806 | 1002 | 0.9155 | 0.2814 | 0.9155 | 0.9568 |
| 0.061 | 8.0968 | 1004 | 0.9048 | 0.2814 | 0.9048 | 0.9512 |
| 0.061 | 8.1129 | 1006 | 0.9055 | 0.2814 | 0.9055 | 0.9516 |
| 0.061 | 8.1290 | 1008 | 0.9155 | 0.2814 | 0.9155 | 0.9568 |
| 0.061 | 8.1452 | 1010 | 0.9482 | 0.2701 | 0.9482 | 0.9738 |
| 0.061 | 8.1613 | 1012 | 0.9626 | 0.2962 | 0.9626 | 0.9811 |
| 0.061 | 8.1774 | 1014 | 0.9811 | 0.2608 | 0.9811 | 0.9905 |
| 0.061 | 8.1935 | 1016 | 0.9795 | 0.2608 | 0.9795 | 0.9897 |
| 0.061 | 8.2097 | 1018 | 0.9604 | 0.2962 | 0.9604 | 0.9800 |
| 0.061 | 8.2258 | 1020 | 0.9379 | 0.2945 | 0.9379 | 0.9684 |
| 0.061 | 8.2419 | 1022 | 0.9294 | 0.2945 | 0.9294 | 0.9640 |
| 0.061 | 8.2581 | 1024 | 0.9165 | 0.2753 | 0.9165 | 0.9573 |
| 0.061 | 8.2742 | 1026 | 0.9004 | 0.2611 | 0.9004 | 0.9489 |
| 0.061 | 8.2903 | 1028 | 0.8909 | 0.2536 | 0.8909 | 0.9439 |
| 0.061 | 8.3065 | 1030 | 0.8910 | 0.2536 | 0.8910 | 0.9439 |
| 0.061 | 8.3226 | 1032 | 0.8871 | 0.2457 | 0.8871 | 0.9419 |
| 0.061 | 8.3387 | 1034 | 0.8807 | 0.2457 | 0.8807 | 0.9384 |
| 0.061 | 8.3548 | 1036 | 0.8676 | 0.2457 | 0.8676 | 0.9314 |
| 0.061 | 8.3710 | 1038 | 0.8548 | 0.2457 | 0.8548 | 0.9245 |
| 0.061 | 8.3871 | 1040 | 0.8540 | 0.2457 | 0.8540 | 0.9241 |
| 0.061 | 8.4032 | 1042 | 0.8626 | 0.2457 | 0.8626 | 0.9288 |
| 0.061 | 8.4194 | 1044 | 0.8656 | 0.2457 | 0.8656 | 0.9304 |
| 0.061 | 8.4355 | 1046 | 0.8608 | 0.2457 | 0.8608 | 0.9278 |
| 0.061 | 8.4516 | 1048 | 0.8473 | 0.2457 | 0.8473 | 0.9205 |
| 0.061 | 8.4677 | 1050 | 0.8381 | 0.2457 | 0.8381 | 0.9155 |
| 0.061 | 8.4839 | 1052 | 0.8226 | 0.2306 | 0.8226 | 0.9070 |
| 0.061 | 8.5 | 1054 | 0.8181 | 0.2435 | 0.8181 | 0.9045 |
| 0.061 | 8.5161 | 1056 | 0.8289 | 0.2306 | 0.8289 | 0.9104 |
| 0.061 | 8.5323 | 1058 | 0.8513 | 0.2457 | 0.8513 | 0.9226 |
| 0.061 | 8.5484 | 1060 | 0.8665 | 0.2536 | 0.8665 | 0.9309 |
| 0.061 | 8.5645 | 1062 | 0.8928 | 0.2536 | 0.8928 | 0.9449 |
| 0.061 | 8.5806 | 1064 | 0.9211 | 0.2611 | 0.9211 | 0.9597 |
| 0.061 | 8.5968 | 1066 | 0.9450 | 0.2884 | 0.9450 | 0.9721 |
| 0.061 | 8.6129 | 1068 | 0.9801 | 0.2945 | 0.9801 | 0.9900 |
| 0.061 | 8.6290 | 1070 | 1.0062 | 0.3129 | 1.0062 | 1.0031 |
| 0.061 | 8.6452 | 1072 | 1.0307 | 0.2783 | 1.0307 | 1.0152 |
| 0.061 | 8.6613 | 1074 | 1.0477 | 0.2673 | 1.0477 | 1.0236 |
| 0.061 | 8.6774 | 1076 | 1.0723 | 0.2673 | 1.0723 | 1.0355 |
| 0.061 | 8.6935 | 1078 | 1.0770 | 0.2736 | 1.0770 | 1.0378 |
| 0.061 | 8.7097 | 1080 | 1.0749 | 0.2673 | 1.0749 | 1.0368 |
| 0.061 | 8.7258 | 1082 | 1.0549 | 0.2673 | 1.0549 | 1.0271 |
| 0.061 | 8.7419 | 1084 | 1.0212 | 0.2783 | 1.0212 | 1.0105 |
| 0.061 | 8.7581 | 1086 | 0.9908 | 0.3129 | 0.9908 | 0.9954 |
| 0.061 | 8.7742 | 1088 | 0.9681 | 0.3075 | 0.9681 | 0.9839 |
| 0.061 | 8.7903 | 1090 | 0.9419 | 0.2611 | 0.9419 | 0.9705 |
| 0.061 | 8.8065 | 1092 | 0.9220 | 0.2536 | 0.9220 | 0.9602 |
| 0.061 | 8.8226 | 1094 | 0.9179 | 0.2536 | 0.9179 | 0.9581 |
| 0.061 | 8.8387 | 1096 | 0.9142 | 0.2536 | 0.9142 | 0.9562 |
| 0.061 | 8.8548 | 1098 | 0.9015 | 0.2536 | 0.9015 | 0.9495 |
| 0.061 | 8.8710 | 1100 | 0.8943 | 0.2536 | 0.8943 | 0.9457 |
| 0.061 | 8.8871 | 1102 | 0.8940 | 0.2536 | 0.8940 | 0.9455 |
| 0.061 | 8.9032 | 1104 | 0.8937 | 0.2536 | 0.8937 | 0.9454 |
| 0.061 | 8.9194 | 1106 | 0.8998 | 0.2536 | 0.8998 | 0.9486 |
| 0.061 | 8.9355 | 1108 | 0.9010 | 0.2536 | 0.9010 | 0.9492 |
| 0.061 | 8.9516 | 1110 | 0.9035 | 0.2536 | 0.9035 | 0.9505 |
| 0.061 | 8.9677 | 1112 | 0.9090 | 0.2536 | 0.9090 | 0.9534 |
| 0.061 | 8.9839 | 1114 | 0.9231 | 0.2683 | 0.9231 | 0.9608 |
| 0.061 | 9.0 | 1116 | 0.9397 | 0.2814 | 0.9397 | 0.9694 |
| 0.061 | 9.0161 | 1118 | 0.9565 | 0.2814 | 0.9565 | 0.9780 |
| 0.061 | 9.0323 | 1120 | 0.9745 | 0.3002 | 0.9745 | 0.9872 |
| 0.061 | 9.0484 | 1122 | 0.9833 | 0.3129 | 0.9833 | 0.9916 |
| 0.061 | 9.0645 | 1124 | 0.9838 | 0.3129 | 0.9838 | 0.9919 |
| 0.061 | 9.0806 | 1126 | 0.9734 | 0.3002 | 0.9734 | 0.9866 |
| 0.061 | 9.0968 | 1128 | 0.9530 | 0.2814 | 0.9530 | 0.9762 |
| 0.061 | 9.1129 | 1130 | 0.9337 | 0.2814 | 0.9337 | 0.9663 |
| 0.061 | 9.1290 | 1132 | 0.9245 | 0.2814 | 0.9245 | 0.9615 |
| 0.061 | 9.1452 | 1134 | 0.9249 | 0.2814 | 0.9249 | 0.9617 |
| 0.061 | 9.1613 | 1136 | 0.9364 | 0.2814 | 0.9364 | 0.9677 |
| 0.061 | 9.1774 | 1138 | 0.9477 | 0.2814 | 0.9477 | 0.9735 |
| 0.061 | 9.1935 | 1140 | 0.9597 | 0.2875 | 0.9597 | 0.9796 |
| 0.061 | 9.2097 | 1142 | 0.9642 | 0.2875 | 0.9642 | 0.9819 |
| 0.061 | 9.2258 | 1144 | 0.9749 | 0.3002 | 0.9749 | 0.9874 |
| 0.061 | 9.2419 | 1146 | 0.9796 | 0.3002 | 0.9796 | 0.9898 |
| 0.061 | 9.2581 | 1148 | 0.9853 | 0.3002 | 0.9853 | 0.9926 |
| 0.061 | 9.2742 | 1150 | 0.9821 | 0.3002 | 0.9821 | 0.9910 |
| 0.061 | 9.2903 | 1152 | 0.9722 | 0.2875 | 0.9722 | 0.9860 |
| 0.061 | 9.3065 | 1154 | 0.9616 | 0.2875 | 0.9616 | 0.9806 |
| 0.061 | 9.3226 | 1156 | 0.9640 | 0.2875 | 0.9640 | 0.9819 |
| 0.061 | 9.3387 | 1158 | 0.9735 | 0.2875 | 0.9735 | 0.9867 |
| 0.061 | 9.3548 | 1160 | 0.9876 | 0.3002 | 0.9876 | 0.9938 |
| 0.061 | 9.3710 | 1162 | 0.9916 | 0.3002 | 0.9916 | 0.9958 |
| 0.061 | 9.3871 | 1164 | 0.9908 | 0.3002 | 0.9908 | 0.9954 |
| 0.061 | 9.4032 | 1166 | 0.9864 | 0.3002 | 0.9864 | 0.9932 |
| 0.061 | 9.4194 | 1168 | 0.9816 | 0.2875 | 0.9816 | 0.9908 |
| 0.061 | 9.4355 | 1170 | 0.9773 | 0.2875 | 0.9773 | 0.9886 |
| 0.061 | 9.4516 | 1172 | 0.9753 | 0.2875 | 0.9753 | 0.9876 |
| 0.061 | 9.4677 | 1174 | 0.9775 | 0.2875 | 0.9775 | 0.9887 |
| 0.061 | 9.4839 | 1176 | 0.9765 | 0.2875 | 0.9765 | 0.9882 |
| 0.061 | 9.5 | 1178 | 0.9737 | 0.2875 | 0.9737 | 0.9868 |
| 0.061 | 9.5161 | 1180 | 0.9699 | 0.2875 | 0.9699 | 0.9848 |
| 0.061 | 9.5323 | 1182 | 0.9672 | 0.2875 | 0.9672 | 0.9835 |
| 0.061 | 9.5484 | 1184 | 0.9658 | 0.2875 | 0.9658 | 0.9827 |
| 0.061 | 9.5645 | 1186 | 0.9682 | 0.2875 | 0.9682 | 0.9840 |
| 0.061 | 9.5806 | 1188 | 0.9735 | 0.2875 | 0.9735 | 0.9867 |
| 0.061 | 9.5968 | 1190 | 0.9791 | 0.2875 | 0.9791 | 0.9895 |
| 0.061 | 9.6129 | 1192 | 0.9818 | 0.2875 | 0.9818 | 0.9909 |
| 0.061 | 9.6290 | 1194 | 0.9845 | 0.2875 | 0.9845 | 0.9922 |
| 0.061 | 9.6452 | 1196 | 0.9882 | 0.2875 | 0.9882 | 0.9941 |
| 0.061 | 9.6613 | 1198 | 0.9915 | 0.3002 | 0.9915 | 0.9957 |
| 0.061 | 9.6774 | 1200 | 0.9966 | 0.3002 | 0.9966 | 0.9983 |
| 0.061 | 9.6935 | 1202 | 1.0015 | 0.3002 | 1.0015 | 1.0007 |
| 0.061 | 9.7097 | 1204 | 1.0051 | 0.3002 | 1.0051 | 1.0025 |
| 0.061 | 9.7258 | 1206 | 1.0051 | 0.3002 | 1.0051 | 1.0025 |
| 0.061 | 9.7419 | 1208 | 1.0037 | 0.3002 | 1.0037 | 1.0018 |
| 0.061 | 9.7581 | 1210 | 1.0019 | 0.3002 | 1.0019 | 1.0009 |
| 0.061 | 9.7742 | 1212 | 0.9993 | 0.3002 | 0.9993 | 0.9996 |
| 0.061 | 9.7903 | 1214 | 0.9972 | 0.3002 | 0.9972 | 0.9986 |
| 0.061 | 9.8065 | 1216 | 0.9936 | 0.3002 | 0.9936 | 0.9968 |
| 0.061 | 9.8226 | 1218 | 0.9907 | 0.3002 | 0.9907 | 0.9954 |
| 0.061 | 9.8387 | 1220 | 0.9894 | 0.3002 | 0.9894 | 0.9947 |
| 0.061 | 9.8548 | 1222 | 0.9901 | 0.3002 | 0.9901 | 0.9951 |
| 0.061 | 9.8710 | 1224 | 0.9919 | 0.3002 | 0.9919 | 0.9959 |
| 0.061 | 9.8871 | 1226 | 0.9915 | 0.3002 | 0.9915 | 0.9957 |
| 0.061 | 9.9032 | 1228 | 0.9905 | 0.3002 | 0.9905 | 0.9952 |
| 0.061 | 9.9194 | 1230 | 0.9902 | 0.3002 | 0.9902 | 0.9951 |
| 0.061 | 9.9355 | 1232 | 0.9897 | 0.3002 | 0.9897 | 0.9949 |
| 0.061 | 9.9516 | 1234 | 0.9897 | 0.3002 | 0.9897 | 0.9948 |
| 0.061 | 9.9677 | 1236 | 0.9899 | 0.3002 | 0.9899 | 0.9949 |
| 0.061 | 9.9839 | 1238 | 0.9903 | 0.3002 | 0.9903 | 0.9951 |
| 0.061 | 10.0 | 1240 | 0.9905 | 0.3002 | 0.9905 | 0.9952 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
onnx-community/granite-timeseries-patchtsmixer | onnx-community | 2024-11-26T16:37:35Z | 143 | 0 | transformers.js | [
"transformers.js",
"onnx",
"patchtsmixer",
"time-series-forecasting",
"base_model:ibm-granite/granite-timeseries-patchtsmixer",
"base_model:quantized:ibm-granite/granite-timeseries-patchtsmixer",
"region:us"
] | time-series-forecasting | 2024-11-22T22:58:20Z | ---
library_name: transformers.js
base_model: ibm-granite/granite-timeseries-patchtsmixer
pipeline_tag: time-series-forecasting
---
https://huggingface.co/ibm-granite/granite-timeseries-patchtsmixer with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Time series forecasting w/ `onnx-community/granite-timeseries-patchtsmixer`
```js
import { PatchTSMixerForPrediction, Tensor } from '@huggingface/transformers';
const model_id = "onnx-community/granite-timeseries-patchtsmixer";
const model = await PatchTSMixerForPrediction.from_pretrained(model_id, { dtype: "fp32" });
const dims = [64, 512, 7];
const prod = dims.reduce((a, b) => a * b, 1);
const past_values = new Tensor('float32',
Float32Array.from({ length: prod }, (_, i) => i / prod),
dims,
);
const { prediction_outputs } = await model({ past_values });
console.log(prediction_outputs);
```
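If you want to make your own model web-ready in the same way, a minimal sketch of exporting it to ONNX with the 🤗 Optimum CLI is shown below; the model ID and output directory are placeholders, and the exact exporter options depend on your architecture (see the note at the end of this card and the Optimum docs):
```bash
# Install Optimum with ONNX export support
pip install "optimum[exporters]"
# Export a Hugging Face model to ONNX, placing the files in an `onnx/` subfolder
optimum-cli export onnx --model your-username/your-model your-model/onnx
```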
---
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
mPLUG/mPLUG-Owl3-7B-241101 | mPLUG | 2024-11-26T16:36:39Z | 3,592 | 8 | null | [
"safetensors",
"mplugowl3",
"chat",
"visual-question-answering",
"custom_code",
"en",
"arxiv:2408.04840",
"license:apache-2.0",
"region:us"
] | visual-question-answering | 2024-11-26T07:50:36Z | ---
license: apache-2.0
language:
- en
pipeline_tag: visual-question-answering
tags:
- chat
---
# mPLUG-Owl3
## Introduction
mPLUG-Owl3 is a state-of-the-art multi-modal large language model designed to tackle the challenges of long image sequence understanding. We propose Hyper Attention, which boosts the speed of long visual sequence understanding in multimodal large language models by sixfold, allowing for processing of visual sequences that are eight times longer. Meanwhile, we maintain excellent performance on single-image, multi-image, and video tasks.
Github: [mPLUG-Owl](https://github.com/X-PLUG/mPLUG-Owl)
## What's new
```mPLUG-Owl3-7B-241101``` is an improved version of ```mPLUG-Owl3-7B-240728```.
### Fused Hyper Attention
Previously, mPLUG-Owl3 computed cross-attention and self-attention separately and fused the two outputs through an adaptive gate. Now, we use a unified operation that only requires computing attention once.
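As a purely conceptual sketch (not the actual mPLUG-Owl3 implementation), one way to realize such a fused operation is to concatenate the text and visual key/value sets and run a single attention call; all shapes and names below are made up for illustration:
```python
import torch
import torch.nn.functional as F

# Conceptual sketch only: fuse self-attention over text and cross-attention over
# visual features into one attention computation by concatenating keys/values.
B, T_txt, T_img, D = 2, 16, 64, 32
q     = torch.randn(B, T_txt, D)   # queries from text hidden states
k_txt = torch.randn(B, T_txt, D)   # text keys/values (self-attention part)
v_txt = torch.randn(B, T_txt, D)
k_img = torch.randn(B, T_img, D)   # visual keys/values (cross-attention part)
v_img = torch.randn(B, T_img, D)

k = torch.cat([k_txt, k_img], dim=1)             # (B, T_txt + T_img, D)
v = torch.cat([v_txt, v_img], dim=1)
out = F.scaled_dot_product_attention(q, k, v)    # a single attention call
print(out.shape)                                 # torch.Size([2, 16, 32])
```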
### New template for media inputs
We now use the following format to represent split high-resolution images. In addition, image splitting can now be enabled when the input consists of multiple images for further performance gains, a combination the old version of mPLUG-Owl3 was not trained to handle.
```
<|start_cut|>2*3
<|image|> <|image|> <|image|>
<|image|> <|image|> <|image|>
<|image|><|end_cut|>
```
And we use the following format to represent video.
```
<|start_video_frame|><|image|><|image|><|image|><|end_video_frame|>
```
### Adjusted media_offset
Previously, media_offset recorded the range of images each token could see. During training, since the images from multiple samples are concatenated along the batch dimension, media_offset had to be modified carefully; otherwise it would point to the wrong image. To prevent this, media_offset is now a List[List[int]], giving, for each sample in the batch, the position of each of its images within the original sequence. This design also makes computing the cross-attention mask and MI-Rope more efficient and convenient.
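As a purely illustrative example (the indices below are hypothetical, and the processor fills these values in for you), the new layout for a batch of two samples containing two images and one image respectively could look like this:
```python
# Hypothetical illustration of the new media_offset structure (List[List[int]]):
# one inner list per sample in the batch, each entry giving the position of an
# image placeholder within that sample's original token sequence.
media_offset = [
    [3, 42],  # sample 0: images at sequence positions 3 and 42
    [7],      # sample 1: a single image at position 7
]
```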
**All of these changes are well handled by the processor, and you don't need to change the original way of calling it.**
### High performance on video and multi-image scenarios
| Model |NextQA |MVBench |VideoMME w/o sub| LongVideoBench-val| MLVU| LVBench|
|-|-|-|-|-|-|-|
| mPLUG-Owl3-7B-240728| 78.6 |54.5 |53.5 |52.1 |63.7|-|
| mPLUG-Owl3-7B-241101|82.3|59.5|59.3 |59.7|70.0|43.5|
| Model |NLVR2 |Mantis-Eval |MathVerse-mv| SciVerse-mv| BLINK |Q-Bench2|
|-|-|-|-|-|-|-|
| mPLUG-Owl3-7B-240728| 90.8 |63.1 |65.0 |86.2 |50.3 |74.0|
| mPLUG-Owl3-7B-241101|92.7|67.3|65.1 |82.7|53.8|77.7|
| Model |VQAv2 | OK-VQA | GQA | VizWizQA | TextVQA |
|-|-|-|-|-|-|
| mPLUG-Owl3-7B-240728|82.1 |60.1| 65.0| 63.5 |69.0|
| mPLUG-Owl3-7B-241101|83.2 |61.4| 64.7| 62.9 |71.4|
| Model | MMB-EN |MMB-CN |MM-Vet |POPE |AI2D|
|-|-|-|-|-|-|
| mPLUG-Owl3-7B-240728|77.6 |74.3 |40.1 |88.2 |73.8|
| mPLUG-Owl3-7B-241101|80.4 |79.1 |39.8 |88.1 |77.8|
## Quickstart
Load the mPLUG-Owl3. We now only support attn_implementation in ```['sdpa', 'flash_attention_2']```.
```Python
import torch
from modelscope import AutoConfig, AutoModel
model_path = 'iic/mPLUG-Owl3-2B-241101'
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
print(config)
model = AutoModel.from_pretrained(model_path, attn_implementation='flash_attention_2', torch_dtype=torch.bfloat16, trust_remote_code=True)
_ = model.eval().cuda()
device = "cuda"
```
Chat with images.
```Python
from PIL import Image
from modelscope import AutoTokenizer
from decord import VideoReader, cpu
tokenizer = AutoTokenizer.from_pretrained(model_path)
processor = model.init_processor(tokenizer)
image = Image.new('RGB', (500, 500), color='red')
messages = [
{"role": "user", "content": """<|image|>
Describe this image."""},
{"role": "assistant", "content": ""}
]
inputs = processor(messages, images=[image], videos=None)
inputs.to('cuda')
inputs.update({
'tokenizer': tokenizer,
'max_new_tokens':100,
'decode_text':True,
})
g = model.generate(**inputs)
print(g)
```
Chat with a video.
```Python
from PIL import Image
from modelscope import AutoTokenizer
from decord import VideoReader, cpu # pip install decord
tokenizer = AutoTokenizer.from_pretrained(model_path)
processor = model.init_processor(tokenizer)
messages = [
{"role": "user", "content": """<|video|>
Describe this video."""},
{"role": "assistant", "content": ""}
]
videos = ['/nas-mmu-data/examples/car_room.mp4']
MAX_NUM_FRAMES=16
def encode_video(video_path):
def uniform_sample(l, n):
gap = len(l) / n
idxs = [int(i * gap + gap / 2) for i in range(n)]
return [l[i] for i in idxs]
vr = VideoReader(video_path, ctx=cpu(0))
sample_fps = round(vr.get_avg_fps() / 1) # FPS
frame_idx = [i for i in range(0, len(vr), sample_fps)]
if len(frame_idx) > MAX_NUM_FRAMES:
frame_idx = uniform_sample(frame_idx, MAX_NUM_FRAMES)
frames = vr.get_batch(frame_idx).asnumpy()
frames = [Image.fromarray(v.astype('uint8')) for v in frames]
print('num frames:', len(frames))
return frames
video_frames = [encode_video(_) for _ in videos]
inputs = processor(messages, images=None, videos=video_frames)
inputs.to(device)
inputs.update({
'tokenizer': tokenizer,
'max_new_tokens':100,
'decode_text':True,
})
g = model.generate(**inputs)
print(g)
```
### Save memory by Liger-Kernel
mPLUG-Owl3 is based on Qwen2, which can be optimized through the Liger-Kernel to reduce memory usage.
```
pip install liger-kernel
```
```python
def apply_liger_kernel_to_mplug_owl3(
rms_norm: bool = True,
swiglu: bool = True,
model = None,
) -> None:
from liger_kernel.transformers.monkey_patch import _patch_rms_norm_module
from liger_kernel.transformers.monkey_patch import _bind_method_to_module
from liger_kernel.transformers.swiglu import LigerSwiGLUMLP
"""
Apply Liger kernels to replace original implementation in HuggingFace Qwen2 models
Args:
rms_norm (bool): Whether to apply Liger's RMSNorm. Default is True.
swiglu (bool): Whether to apply Liger's SwiGLU MLP. Default is True.
model (PreTrainedModel): The model instance to apply Liger kernels to, if the model has already been
loaded. Default is None.
"""
base_model = model.language_model.model
if rms_norm:
_patch_rms_norm_module(base_model.norm)
for decoder_layer in base_model.layers:
if swiglu:
_bind_method_to_module(
decoder_layer.mlp, "forward", LigerSwiGLUMLP.forward
)
if rms_norm:
_patch_rms_norm_module(decoder_layer.input_layernorm)
_patch_rms_norm_module(decoder_layer.post_attention_layernorm)
print("Applied Liger kernels to Qwen2 in mPLUG-Owl3")
import torch
from modelscope import AutoConfig, AutoModel
model_path = 'iic/mPLUG-Owl3-2B-241101'
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
print(config)
model = AutoModel.from_pretrained(model_path, attn_implementation='flash_attention_2', torch_dtype=torch.bfloat16, trust_remote_code=True)
_ = model.eval().cuda()
device = "cuda"
apply_liger_kernel_to_mplug_owl3(model=model)
```
### Save memory by setting device_map
When you have more than one GPU, you can set ```device_map='auto'``` to split mPLUG-Owl3 across multiple GPUs. However, this will slow down the inference speed.
```python
model = AutoModel.from_pretrained(model_path, attn_implementation='flash_attention_2', device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
_ = model.eval()
first_layer_name = list(model.hf_device_map.keys())[0]
device = model.hf_device_map[first_layer_name]
```
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{ye2024mplugowl3longimagesequenceunderstanding,
title={mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models},
author={Jiabo Ye and Haiyang Xu and Haowei Liu and Anwen Hu and Ming Yan and Qi Qian and Ji Zhang and Fei Huang and Jingren Zhou},
year={2024},
eprint={2408.04840},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2408.04840},
}
```
|
imagepipeline/Arthemy-XL | imagepipeline | 2024-11-26T16:36:29Z | 45 | 0 | diffusers | [
"diffusers",
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-11-26T16:34:40Z | ---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## Arthemy-XL
<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1baef1da-9e02-4f3b-99d8-24d3dca54807/width=450/ComfyUI_01756_.jpeg" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This checkpoint model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details -
[](https://imagepipeline.io/models/Arthemy-XL?id=6311dde8-6277-4aa0-ae57-baaef5efb6d9/)
## How to try this model?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php`, `javascript`, `node`, etc.? Check out our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sdxl/text2image/v1/run"
payload = json.dumps({
"model_id": "6311dde8-6277-4aa0-ae57-baaef5efb6d9",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "",
"lora_weights": ""
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready-to-use `MODELS` like this for `SD 1.5` and `SDXL`:
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sdxl/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
---
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
furrutiav/roberta_mixtral_nllfg_vanilla_cola_none_naive | furrutiav | 2024-11-26T16:28:54Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-11-26T16:28:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ace-in-the-hole/7k-PhoContent-200 | ace-in-the-hole | 2024-11-26T16:23:05Z | 109 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"license:agpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T16:22:48Z | ---
library_name: transformers
license: agpl-3.0
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
model-index:
- name: 7k-PhoContent-200
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 7k-PhoContent-200
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k35_task2_organization_fold0 | MayBashendy | 2024-11-26T16:22:39Z | 164 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T16:06:42Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits_FineTuningAraBERT_AugV5_k35_task2_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits_FineTuningAraBERT_AugV5_k35_task2_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7497
- Qwk: 0.3412
- Mse: 0.7497
- Rmse: 0.8658
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0180 | 2 | 3.8128 | 0.0205 | 3.8128 | 1.9527 |
| No log | 0.0360 | 4 | 2.1527 | -0.0070 | 2.1527 | 1.4672 |
| No log | 0.0541 | 6 | 1.1146 | 0.1394 | 1.1146 | 1.0557 |
| No log | 0.0721 | 8 | 1.0417 | 0.1567 | 1.0417 | 1.0206 |
| No log | 0.0901 | 10 | 1.3831 | 0.1613 | 1.3831 | 1.1760 |
| No log | 0.1081 | 12 | 1.5211 | 0.1356 | 1.5211 | 1.2333 |
| No log | 0.1261 | 14 | 1.4415 | 0.0894 | 1.4415 | 1.2006 |
| No log | 0.1441 | 16 | 1.1977 | 0.1005 | 1.1977 | 1.0944 |
| No log | 0.1622 | 18 | 1.0622 | 0.0597 | 1.0622 | 1.0307 |
| No log | 0.1802 | 20 | 0.8327 | 0.0583 | 0.8327 | 0.9125 |
| No log | 0.1982 | 22 | 0.8532 | 0.1388 | 0.8532 | 0.9237 |
| No log | 0.2162 | 24 | 0.9412 | -0.0154 | 0.9412 | 0.9702 |
| No log | 0.2342 | 26 | 1.1767 | -0.0624 | 1.1767 | 1.0848 |
| No log | 0.2523 | 28 | 1.3747 | -0.0365 | 1.3747 | 1.1725 |
| No log | 0.2703 | 30 | 1.1310 | 0.0304 | 1.1310 | 1.0635 |
| No log | 0.2883 | 32 | 0.9043 | 0.0883 | 0.9043 | 0.9510 |
| No log | 0.3063 | 34 | 0.9238 | 0.0883 | 0.9238 | 0.9611 |
| No log | 0.3243 | 36 | 0.8921 | 0.1159 | 0.8921 | 0.9445 |
| No log | 0.3423 | 38 | 0.7907 | 0.1866 | 0.7907 | 0.8892 |
| No log | 0.3604 | 40 | 0.7262 | 0.2846 | 0.7262 | 0.8522 |
| No log | 0.3784 | 42 | 0.7385 | 0.2285 | 0.7385 | 0.8594 |
| No log | 0.3964 | 44 | 0.8023 | 0.1588 | 0.8023 | 0.8957 |
| No log | 0.4144 | 46 | 0.7884 | 0.1576 | 0.7884 | 0.8879 |
| No log | 0.4324 | 48 | 0.7306 | 0.1233 | 0.7306 | 0.8548 |
| No log | 0.4505 | 50 | 0.7495 | 0.2402 | 0.7495 | 0.8657 |
| No log | 0.4685 | 52 | 0.7428 | 0.1678 | 0.7428 | 0.8619 |
| No log | 0.4865 | 54 | 0.7452 | 0.1233 | 0.7452 | 0.8633 |
| No log | 0.5045 | 56 | 0.8098 | 0.1588 | 0.8098 | 0.8999 |
| No log | 0.5225 | 58 | 1.0083 | 0.1311 | 1.0083 | 1.0042 |
| No log | 0.5405 | 60 | 1.2121 | 0.1005 | 1.2121 | 1.1010 |
| No log | 0.5586 | 62 | 1.2780 | 0.1230 | 1.2780 | 1.1305 |
| No log | 0.5766 | 64 | 1.1854 | 0.1005 | 1.1854 | 1.0887 |
| No log | 0.5946 | 66 | 1.0563 | 0.0883 | 1.0563 | 1.0278 |
| No log | 0.6126 | 68 | 0.8912 | 0.1727 | 0.8912 | 0.9440 |
| No log | 0.6306 | 70 | 0.7692 | 0.2342 | 0.7692 | 0.8770 |
| No log | 0.6486 | 72 | 0.7055 | 0.2004 | 0.7055 | 0.8399 |
| No log | 0.6667 | 74 | 0.6948 | 0.1610 | 0.6948 | 0.8335 |
| No log | 0.6847 | 76 | 0.7028 | 0.2205 | 0.7028 | 0.8383 |
| No log | 0.7027 | 78 | 0.7322 | 0.3886 | 0.7322 | 0.8557 |
| No log | 0.7207 | 80 | 0.9886 | 0.0988 | 0.9886 | 0.9943 |
| No log | 0.7387 | 82 | 1.0959 | 0.1432 | 1.0959 | 1.0469 |
| No log | 0.7568 | 84 | 1.2266 | 0.1047 | 1.2266 | 1.1075 |
| No log | 0.7748 | 86 | 1.2303 | 0.1503 | 1.2303 | 1.1092 |
| No log | 0.7928 | 88 | 1.0598 | 0.1146 | 1.0598 | 1.0295 |
| No log | 0.8108 | 90 | 0.8579 | 0.1910 | 0.8579 | 0.9262 |
| No log | 0.8288 | 92 | 0.7357 | 0.2253 | 0.7357 | 0.8577 |
| No log | 0.8468 | 94 | 0.7139 | 0.3161 | 0.7139 | 0.8449 |
| No log | 0.8649 | 96 | 0.7574 | 0.2862 | 0.7574 | 0.8703 |
| No log | 0.8829 | 98 | 0.8223 | 0.2253 | 0.8223 | 0.9068 |
| No log | 0.9009 | 100 | 0.8905 | 0.2104 | 0.8905 | 0.9437 |
| No log | 0.9189 | 102 | 0.9292 | 0.0854 | 0.9292 | 0.9639 |
| No log | 0.9369 | 104 | 0.8478 | 0.1404 | 0.8478 | 0.9208 |
| No log | 0.9550 | 106 | 0.7897 | 0.1677 | 0.7897 | 0.8886 |
| No log | 0.9730 | 108 | 0.7573 | 0.1814 | 0.7573 | 0.8702 |
| No log | 0.9910 | 110 | 0.7295 | 0.2899 | 0.7295 | 0.8541 |
| No log | 1.0090 | 112 | 0.7871 | 0.3534 | 0.7871 | 0.8872 |
| No log | 1.0270 | 114 | 0.7858 | 0.3812 | 0.7858 | 0.8865 |
| No log | 1.0450 | 116 | 0.6670 | 0.4255 | 0.6670 | 0.8167 |
| No log | 1.0631 | 118 | 0.6023 | 0.3551 | 0.6023 | 0.7761 |
| No log | 1.0811 | 120 | 0.6497 | 0.2346 | 0.6497 | 0.8060 |
| No log | 1.0991 | 122 | 0.6334 | 0.2665 | 0.6334 | 0.7959 |
| No log | 1.1171 | 124 | 0.5882 | 0.3653 | 0.5882 | 0.7669 |
| No log | 1.1351 | 126 | 0.5813 | 0.3499 | 0.5813 | 0.7624 |
| No log | 1.1532 | 128 | 0.5932 | 0.3653 | 0.5932 | 0.7702 |
| No log | 1.1712 | 130 | 0.5819 | 0.3346 | 0.5819 | 0.7628 |
| No log | 1.1892 | 132 | 0.5705 | 0.3063 | 0.5705 | 0.7553 |
| No log | 1.2072 | 134 | 0.5657 | 0.3693 | 0.5657 | 0.7521 |
| No log | 1.2252 | 136 | 0.5743 | 0.3510 | 0.5743 | 0.7578 |
| No log | 1.2432 | 138 | 0.5988 | 0.3489 | 0.5988 | 0.7738 |
| No log | 1.2613 | 140 | 0.6229 | 0.2995 | 0.6229 | 0.7893 |
| No log | 1.2793 | 142 | 0.5951 | 0.4164 | 0.5951 | 0.7714 |
| No log | 1.2973 | 144 | 0.5974 | 0.5132 | 0.5974 | 0.7729 |
| No log | 1.3153 | 146 | 0.6558 | 0.4406 | 0.6558 | 0.8098 |
| No log | 1.3333 | 148 | 0.6640 | 0.4103 | 0.6640 | 0.8149 |
| No log | 1.3514 | 150 | 0.6464 | 0.4372 | 0.6464 | 0.8040 |
| No log | 1.3694 | 152 | 0.6766 | 0.4025 | 0.6766 | 0.8226 |
| No log | 1.3874 | 154 | 0.7311 | 0.3201 | 0.7311 | 0.8550 |
| No log | 1.4054 | 156 | 0.8348 | 0.2456 | 0.8348 | 0.9137 |
| No log | 1.4234 | 158 | 0.9137 | 0.2554 | 0.9137 | 0.9559 |
| No log | 1.4414 | 160 | 0.8943 | 0.2232 | 0.8943 | 0.9457 |
| No log | 1.4595 | 162 | 0.7897 | 0.3504 | 0.7897 | 0.8886 |
| No log | 1.4775 | 164 | 0.6922 | 0.3641 | 0.6922 | 0.8320 |
| No log | 1.4955 | 166 | 0.6697 | 0.3641 | 0.6697 | 0.8183 |
| No log | 1.5135 | 168 | 0.6543 | 0.3641 | 0.6543 | 0.8089 |
| No log | 1.5315 | 170 | 0.6915 | 0.4114 | 0.6915 | 0.8316 |
| No log | 1.5495 | 172 | 0.7581 | 0.4234 | 0.7581 | 0.8707 |
| No log | 1.5676 | 174 | 0.7520 | 0.3940 | 0.7520 | 0.8672 |
| No log | 1.5856 | 176 | 0.7271 | 0.3173 | 0.7271 | 0.8527 |
| No log | 1.6036 | 178 | 0.7106 | 0.2680 | 0.7106 | 0.8430 |
| No log | 1.6216 | 180 | 0.6183 | 0.3350 | 0.6183 | 0.7863 |
| No log | 1.6396 | 182 | 0.5564 | 0.3806 | 0.5564 | 0.7460 |
| No log | 1.6577 | 184 | 0.5530 | 0.4287 | 0.5530 | 0.7436 |
| No log | 1.6757 | 186 | 0.5680 | 0.3950 | 0.5680 | 0.7537 |
| No log | 1.6937 | 188 | 0.5956 | 0.3786 | 0.5956 | 0.7717 |
| No log | 1.7117 | 190 | 0.6842 | 0.2142 | 0.6842 | 0.8271 |
| No log | 1.7297 | 192 | 0.7144 | 0.1975 | 0.7144 | 0.8452 |
| No log | 1.7477 | 194 | 0.7009 | 0.2629 | 0.7009 | 0.8372 |
| No log | 1.7658 | 196 | 0.6607 | 0.3457 | 0.6607 | 0.8128 |
| No log | 1.7838 | 198 | 0.6547 | 0.3978 | 0.6547 | 0.8091 |
| No log | 1.8018 | 200 | 0.6134 | 0.3709 | 0.6134 | 0.7832 |
| No log | 1.8198 | 202 | 0.6114 | 0.3867 | 0.6114 | 0.7819 |
| No log | 1.8378 | 204 | 0.6122 | 0.3346 | 0.6122 | 0.7824 |
| No log | 1.8559 | 206 | 0.6298 | 0.3642 | 0.6298 | 0.7936 |
| No log | 1.8739 | 208 | 0.7222 | 0.3339 | 0.7222 | 0.8498 |
| No log | 1.8919 | 210 | 0.7162 | 0.2880 | 0.7162 | 0.8463 |
| No log | 1.9099 | 212 | 0.6663 | 0.2995 | 0.6663 | 0.8163 |
| No log | 1.9279 | 214 | 0.6356 | 0.3160 | 0.6356 | 0.7972 |
| No log | 1.9459 | 216 | 0.6893 | 0.3149 | 0.6893 | 0.8302 |
| No log | 1.9640 | 218 | 0.7682 | 0.2795 | 0.7682 | 0.8765 |
| No log | 1.9820 | 220 | 0.8246 | 0.2629 | 0.8246 | 0.9081 |
| No log | 2.0 | 222 | 0.7628 | 0.2795 | 0.7628 | 0.8734 |
| No log | 2.0180 | 224 | 0.7160 | 0.2961 | 0.7160 | 0.8462 |
| No log | 2.0360 | 226 | 0.6778 | 0.3489 | 0.6778 | 0.8233 |
| No log | 2.0541 | 228 | 0.6358 | 0.3339 | 0.6358 | 0.7974 |
| No log | 2.0721 | 230 | 0.7294 | 0.2962 | 0.7294 | 0.8540 |
| No log | 2.0901 | 232 | 0.7215 | 0.2296 | 0.7215 | 0.8494 |
| No log | 2.1081 | 234 | 0.6226 | 0.2463 | 0.6226 | 0.7890 |
| No log | 2.1261 | 236 | 0.5498 | 0.3960 | 0.5498 | 0.7415 |
| No log | 2.1441 | 238 | 0.5644 | 0.3467 | 0.5644 | 0.7513 |
| No log | 2.1622 | 240 | 0.6675 | 0.2629 | 0.6675 | 0.8170 |
| No log | 2.1802 | 242 | 0.9309 | 0.2524 | 0.9309 | 0.9648 |
| No log | 2.1982 | 244 | 1.1172 | 0.2509 | 1.1172 | 1.0570 |
| No log | 2.2162 | 246 | 1.0851 | 0.3005 | 1.0851 | 1.0417 |
| No log | 2.2342 | 248 | 0.8880 | 0.3065 | 0.8880 | 0.9423 |
| No log | 2.2523 | 250 | 0.7647 | 0.3179 | 0.7647 | 0.8744 |
| No log | 2.2703 | 252 | 0.7516 | 0.3019 | 0.7516 | 0.8670 |
| No log | 2.2883 | 254 | 0.7828 | 0.2463 | 0.7828 | 0.8848 |
| No log | 2.3063 | 256 | 0.8334 | 0.2129 | 0.8334 | 0.9129 |
| No log | 2.3243 | 258 | 0.8149 | 0.1473 | 0.8149 | 0.9027 |
| No log | 2.3423 | 260 | 0.7896 | 0.1473 | 0.7896 | 0.8886 |
| No log | 2.3604 | 262 | 0.7761 | 0.1473 | 0.7761 | 0.8810 |
| No log | 2.3784 | 264 | 0.7433 | 0.1640 | 0.7433 | 0.8621 |
| No log | 2.3964 | 266 | 0.6698 | 0.2463 | 0.6698 | 0.8184 |
| No log | 2.4144 | 268 | 0.6239 | 0.2984 | 0.6239 | 0.7899 |
| No log | 2.4324 | 270 | 0.6378 | 0.2807 | 0.6378 | 0.7986 |
| No log | 2.4505 | 272 | 0.7215 | 0.2296 | 0.7215 | 0.8494 |
| No log | 2.4685 | 274 | 0.8352 | 0.1427 | 0.8352 | 0.9139 |
| No log | 2.4865 | 276 | 0.9405 | 0.2211 | 0.9405 | 0.9698 |
| No log | 2.5045 | 278 | 0.8781 | 0.2356 | 0.8781 | 0.9371 |
| No log | 2.5225 | 280 | 0.7593 | 0.2364 | 0.7593 | 0.8714 |
| No log | 2.5405 | 282 | 0.7046 | 0.2296 | 0.7046 | 0.8394 |
| No log | 2.5586 | 284 | 0.6498 | 0.2795 | 0.6498 | 0.8061 |
| No log | 2.5766 | 286 | 0.6194 | 0.3127 | 0.6194 | 0.7870 |
| No log | 2.5946 | 288 | 0.6273 | 0.3127 | 0.6273 | 0.7921 |
| No log | 2.6126 | 290 | 0.6632 | 0.2961 | 0.6632 | 0.8144 |
| No log | 2.6306 | 292 | 0.7027 | 0.2629 | 0.7027 | 0.8383 |
| No log | 2.6486 | 294 | 0.6783 | 0.3179 | 0.6783 | 0.8236 |
| No log | 2.6667 | 296 | 0.6228 | 0.3201 | 0.6228 | 0.7892 |
| No log | 2.6847 | 298 | 0.6008 | 0.3201 | 0.6008 | 0.7751 |
| No log | 2.7027 | 300 | 0.6051 | 0.3201 | 0.6051 | 0.7779 |
| No log | 2.7207 | 302 | 0.6680 | 0.3030 | 0.6680 | 0.8173 |
| No log | 2.7387 | 304 | 0.7869 | 0.2536 | 0.7869 | 0.8871 |
| No log | 2.7568 | 306 | 0.8823 | 0.2738 | 0.8823 | 0.9393 |
| No log | 2.7748 | 308 | 0.8308 | 0.2129 | 0.8308 | 0.9115 |
| No log | 2.7928 | 310 | 0.7397 | 0.2547 | 0.7397 | 0.8601 |
| No log | 2.8108 | 312 | 0.6940 | 0.2869 | 0.6940 | 0.8331 |
| No log | 2.8288 | 314 | 0.7429 | 0.2463 | 0.7429 | 0.8619 |
| No log | 2.8468 | 316 | 0.7912 | 0.2374 | 0.7912 | 0.8895 |
| No log | 2.8649 | 318 | 0.8657 | 0.3104 | 0.8657 | 0.9304 |
| No log | 2.8829 | 320 | 0.9190 | 0.3005 | 0.9190 | 0.9586 |
| No log | 2.9009 | 322 | 0.9321 | 0.2952 | 0.9321 | 0.9654 |
| No log | 2.9189 | 324 | 0.9002 | 0.2524 | 0.9002 | 0.9488 |
| No log | 2.9369 | 326 | 0.8175 | 0.2296 | 0.8175 | 0.9042 |
| No log | 2.9550 | 328 | 0.7502 | 0.2463 | 0.7502 | 0.8661 |
| No log | 2.9730 | 330 | 0.6839 | 0.2869 | 0.6839 | 0.8270 |
| No log | 2.9910 | 332 | 0.6555 | 0.2869 | 0.6555 | 0.8097 |
| No log | 3.0090 | 334 | 0.6374 | 0.3030 | 0.6374 | 0.7984 |
| No log | 3.0270 | 336 | 0.6554 | 0.2708 | 0.6554 | 0.8096 |
| No log | 3.0450 | 338 | 0.7280 | 0.2697 | 0.7280 | 0.8532 |
| No log | 3.0631 | 340 | 0.7831 | 0.2536 | 0.7831 | 0.8850 |
| No log | 3.0811 | 342 | 0.8075 | 0.3448 | 0.8075 | 0.8986 |
| No log | 3.0991 | 344 | 0.7734 | 0.3595 | 0.7734 | 0.8794 |
| No log | 3.1171 | 346 | 0.6120 | 0.4170 | 0.6120 | 0.7823 |
| No log | 3.1351 | 348 | 0.5322 | 0.3987 | 0.5322 | 0.7295 |
| No log | 3.1532 | 350 | 0.5078 | 0.4621 | 0.5078 | 0.7126 |
| No log | 3.1712 | 352 | 0.5221 | 0.4613 | 0.5221 | 0.7226 |
| No log | 3.1892 | 354 | 0.6159 | 0.3030 | 0.6159 | 0.7848 |
| No log | 3.2072 | 356 | 0.6868 | 0.2917 | 0.6868 | 0.8287 |
| No log | 3.2252 | 358 | 0.7146 | 0.2129 | 0.7146 | 0.8454 |
| No log | 3.2432 | 360 | 0.6774 | 0.2547 | 0.6774 | 0.8230 |
| No log | 3.2613 | 362 | 0.6274 | 0.3190 | 0.6274 | 0.7921 |
| No log | 3.2793 | 364 | 0.5947 | 0.3190 | 0.5947 | 0.7712 |
| No log | 3.2973 | 366 | 0.6158 | 0.3190 | 0.6158 | 0.7847 |
| No log | 3.3153 | 368 | 0.6510 | 0.3030 | 0.6510 | 0.8068 |
| No log | 3.3333 | 370 | 0.7170 | 0.2129 | 0.7170 | 0.8468 |
| No log | 3.3514 | 372 | 0.7871 | 0.1962 | 0.7871 | 0.8872 |
| No log | 3.3694 | 374 | 0.7959 | 0.1962 | 0.7959 | 0.8921 |
| No log | 3.3874 | 376 | 0.7558 | 0.2129 | 0.7558 | 0.8693 |
| No log | 3.4054 | 378 | 0.7177 | 0.2129 | 0.7177 | 0.8471 |
| No log | 3.4234 | 380 | 0.6971 | 0.2129 | 0.6971 | 0.8349 |
| No log | 3.4414 | 382 | 0.6809 | 0.3019 | 0.6809 | 0.8252 |
| No log | 3.4595 | 384 | 0.6761 | 0.3030 | 0.6761 | 0.8222 |
| No log | 3.4775 | 386 | 0.7022 | 0.3019 | 0.7022 | 0.8380 |
| No log | 3.4955 | 388 | 0.6606 | 0.3030 | 0.6606 | 0.8128 |
| No log | 3.5135 | 390 | 0.6032 | 0.3190 | 0.6032 | 0.7766 |
| No log | 3.5315 | 392 | 0.5870 | 0.3350 | 0.5870 | 0.7662 |
| No log | 3.5495 | 394 | 0.6200 | 0.3190 | 0.6200 | 0.7874 |
| No log | 3.5676 | 396 | 0.6951 | 0.2547 | 0.6951 | 0.8337 |
| No log | 3.5856 | 398 | 0.7573 | 0.2536 | 0.7573 | 0.8703 |
| No log | 3.6036 | 400 | 0.7615 | 0.2536 | 0.7615 | 0.8726 |
| No log | 3.6216 | 402 | 0.7091 | 0.2536 | 0.7091 | 0.8421 |
| No log | 3.6396 | 404 | 0.6615 | 0.2547 | 0.6615 | 0.8134 |
| No log | 3.6577 | 406 | 0.6485 | 0.2869 | 0.6485 | 0.8053 |
| No log | 3.6757 | 408 | 0.7049 | 0.2129 | 0.7049 | 0.8396 |
| No log | 3.6937 | 410 | 0.7804 | 0.2129 | 0.7804 | 0.8834 |
| No log | 3.7117 | 412 | 0.7970 | 0.2129 | 0.7970 | 0.8928 |
| No log | 3.7297 | 414 | 0.7299 | 0.2296 | 0.7299 | 0.8544 |
| No log | 3.7477 | 416 | 0.6568 | 0.3030 | 0.6568 | 0.8105 |
| No log | 3.7658 | 418 | 0.6485 | 0.3190 | 0.6485 | 0.8053 |
| No log | 3.7838 | 420 | 0.6910 | 0.2697 | 0.6910 | 0.8313 |
| No log | 3.8018 | 422 | 0.7658 | 0.2212 | 0.7658 | 0.8751 |
| No log | 3.8198 | 424 | 0.8436 | 0.2212 | 0.8436 | 0.9185 |
| No log | 3.8378 | 426 | 0.8984 | 0.1988 | 0.8984 | 0.9478 |
| No log | 3.8559 | 428 | 0.8661 | 0.2129 | 0.8661 | 0.9306 |
| No log | 3.8739 | 430 | 0.7807 | 0.2129 | 0.7807 | 0.8836 |
| No log | 3.8919 | 432 | 0.7045 | 0.2296 | 0.7045 | 0.8393 |
| No log | 3.9099 | 434 | 0.6660 | 0.2475 | 0.6660 | 0.8161 |
| No log | 3.9279 | 436 | 0.6558 | 0.2641 | 0.6558 | 0.8098 |
| No log | 3.9459 | 438 | 0.6719 | 0.2309 | 0.6719 | 0.8197 |
| No log | 3.9640 | 440 | 0.7322 | 0.2374 | 0.7322 | 0.8557 |
| No log | 3.9820 | 442 | 0.8473 | 0.2778 | 0.8473 | 0.9205 |
| No log | 4.0 | 444 | 0.9233 | 0.2935 | 0.9233 | 0.9609 |
| No log | 4.0180 | 446 | 0.9443 | 0.3516 | 0.9443 | 0.9717 |
| No log | 4.0360 | 448 | 0.8768 | 0.2885 | 0.8768 | 0.9364 |
| No log | 4.0541 | 450 | 0.8221 | 0.2364 | 0.8221 | 0.9067 |
| No log | 4.0721 | 452 | 0.7652 | 0.2129 | 0.7652 | 0.8747 |
| No log | 4.0901 | 454 | 0.7460 | 0.2129 | 0.7460 | 0.8637 |
| No log | 4.1081 | 456 | 0.7622 | 0.2129 | 0.7622 | 0.8730 |
| No log | 4.1261 | 458 | 0.8182 | 0.2129 | 0.8182 | 0.9046 |
| No log | 4.1441 | 460 | 0.8820 | 0.1988 | 0.8820 | 0.9391 |
| No log | 4.1622 | 462 | 0.8837 | 0.1988 | 0.8837 | 0.9400 |
| No log | 4.1802 | 464 | 0.8128 | 0.2129 | 0.8128 | 0.9016 |
| No log | 4.1982 | 466 | 0.7129 | 0.2296 | 0.7129 | 0.8443 |
| No log | 4.2162 | 468 | 0.6444 | 0.2463 | 0.6444 | 0.8028 |
| No log | 4.2342 | 470 | 0.6199 | 0.2463 | 0.6199 | 0.7874 |
| No log | 4.2523 | 472 | 0.6403 | 0.2463 | 0.6403 | 0.8002 |
| No log | 4.2703 | 474 | 0.6689 | 0.2463 | 0.6689 | 0.8179 |
| No log | 4.2883 | 476 | 0.6937 | 0.2296 | 0.6937 | 0.8329 |
| No log | 4.3063 | 478 | 0.7675 | 0.2129 | 0.7675 | 0.8761 |
| No log | 4.3243 | 480 | 0.8053 | 0.2212 | 0.8053 | 0.8974 |
| No log | 4.3423 | 482 | 0.8405 | 0.2593 | 0.8405 | 0.9168 |
| No log | 4.3604 | 484 | 0.8279 | 0.2593 | 0.8279 | 0.9099 |
| No log | 4.3784 | 486 | 0.8611 | 0.2800 | 0.8611 | 0.9280 |
| No log | 4.3964 | 488 | 0.8772 | 0.3143 | 0.8772 | 0.9366 |
| No log | 4.4144 | 490 | 0.9202 | 0.2601 | 0.9202 | 0.9593 |
| No log | 4.4324 | 492 | 0.9551 | 0.2946 | 0.9551 | 0.9773 |
| No log | 4.4505 | 494 | 0.9667 | 0.2900 | 0.9667 | 0.9832 |
| No log | 4.4685 | 496 | 0.8883 | 0.2857 | 0.8883 | 0.9425 |
| No log | 4.4865 | 498 | 0.8095 | 0.2857 | 0.8095 | 0.8997 |
| 0.3433 | 4.5045 | 500 | 0.7896 | 0.2506 | 0.7896 | 0.8886 |
| 0.3433 | 4.5225 | 502 | 0.8128 | 0.2506 | 0.8128 | 0.9016 |
| 0.3433 | 4.5405 | 504 | 0.8720 | 0.2506 | 0.8720 | 0.9338 |
| 0.3433 | 4.5586 | 506 | 0.9158 | 0.2912 | 0.9158 | 0.9570 |
| 0.3433 | 4.5766 | 508 | 0.9118 | 0.2912 | 0.9118 | 0.9549 |
| 0.3433 | 4.5946 | 510 | 0.9150 | 0.2912 | 0.9150 | 0.9566 |
| 0.3433 | 4.6126 | 512 | 0.8812 | 0.3234 | 0.8812 | 0.9387 |
| 0.3433 | 4.6306 | 514 | 0.8594 | 0.3234 | 0.8594 | 0.9271 |
| 0.3433 | 4.6486 | 516 | 0.8408 | 0.3190 | 0.8408 | 0.9170 |
| 0.3433 | 4.6667 | 518 | 0.8324 | 0.3190 | 0.8324 | 0.9124 |
| 0.3433 | 4.6847 | 520 | 0.8702 | 0.3190 | 0.8702 | 0.9329 |
| 0.3433 | 4.7027 | 522 | 0.9313 | 0.3316 | 0.9313 | 0.9650 |
| 0.3433 | 4.7207 | 524 | 0.9342 | 0.3540 | 0.9342 | 0.9666 |
| 0.3433 | 4.7387 | 526 | 0.8888 | 0.2857 | 0.8888 | 0.9427 |
| 0.3433 | 4.7568 | 528 | 0.8142 | 0.2800 | 0.8142 | 0.9023 |
| 0.3433 | 4.7748 | 530 | 0.7549 | 0.2362 | 0.7549 | 0.8688 |
| 0.3433 | 4.7928 | 532 | 0.7095 | 0.1975 | 0.7095 | 0.8423 |
| 0.3433 | 4.8108 | 534 | 0.7030 | 0.1808 | 0.7030 | 0.8384 |
| 0.3433 | 4.8288 | 536 | 0.7278 | 0.2212 | 0.7278 | 0.8531 |
| 0.3433 | 4.8468 | 538 | 0.7709 | 0.2800 | 0.7709 | 0.8780 |
| 0.3433 | 4.8649 | 540 | 0.8233 | 0.2857 | 0.8233 | 0.9074 |
| 0.3433 | 4.8829 | 542 | 0.8534 | 0.2857 | 0.8534 | 0.9238 |
| 0.3433 | 4.9009 | 544 | 0.8631 | 0.3190 | 0.8631 | 0.9290 |
| 0.3433 | 4.9189 | 546 | 0.8376 | 0.2857 | 0.8376 | 0.9152 |
| 0.3433 | 4.9369 | 548 | 0.8359 | 0.2857 | 0.8359 | 0.9143 |
| 0.3433 | 4.9550 | 550 | 0.8323 | 0.2857 | 0.8323 | 0.9123 |
| 0.3433 | 4.9730 | 552 | 0.7685 | 0.2593 | 0.7685 | 0.8766 |
| 0.3433 | 4.9910 | 554 | 0.7158 | 0.2374 | 0.7158 | 0.8460 |
| 0.3433 | 5.0090 | 556 | 0.6551 | 0.2708 | 0.6551 | 0.8094 |
| 0.3433 | 5.0270 | 558 | 0.6190 | 0.3384 | 0.6190 | 0.7868 |
| 0.3433 | 5.0450 | 560 | 0.6253 | 0.3577 | 0.6253 | 0.7907 |
| 0.3433 | 5.0631 | 562 | 0.6885 | 0.3114 | 0.6885 | 0.8298 |
| 0.3433 | 5.0811 | 564 | 0.7961 | 0.3421 | 0.7961 | 0.8922 |
| 0.3433 | 5.0991 | 566 | 0.9034 | 0.2783 | 0.9034 | 0.9505 |
| 0.3433 | 5.1171 | 568 | 0.9215 | 0.2783 | 0.9215 | 0.9600 |
| 0.3433 | 5.1351 | 570 | 0.8738 | 0.2732 | 0.8738 | 0.9348 |
| 0.3433 | 5.1532 | 572 | 0.8631 | 0.2861 | 0.8631 | 0.9290 |
| 0.3433 | 5.1712 | 574 | 0.8401 | 0.3548 | 0.8401 | 0.9166 |
| 0.3433 | 5.1892 | 576 | 0.7723 | 0.3343 | 0.7723 | 0.8788 |
| 0.3433 | 5.2072 | 578 | 0.6819 | 0.3114 | 0.6819 | 0.8258 |
| 0.3433 | 5.2252 | 580 | 0.6305 | 0.3019 | 0.6305 | 0.7940 |
| 0.3433 | 5.2432 | 582 | 0.6151 | 0.3499 | 0.6151 | 0.7843 |
| 0.3433 | 5.2613 | 584 | 0.6263 | 0.3073 | 0.6263 | 0.7914 |
| 0.3433 | 5.2793 | 586 | 0.6635 | 0.3266 | 0.6635 | 0.8145 |
| 0.3433 | 5.2973 | 588 | 0.7208 | 0.3383 | 0.7208 | 0.8490 |
| 0.3433 | 5.3153 | 590 | 0.8046 | 0.3412 | 0.8046 | 0.8970 |
| 0.3433 | 5.3333 | 592 | 0.8648 | 0.2802 | 0.8648 | 0.9300 |
| 0.3433 | 5.3514 | 594 | 0.8899 | 0.2802 | 0.8899 | 0.9434 |
| 0.3433 | 5.3694 | 596 | 0.8837 | 0.2802 | 0.8837 | 0.9400 |
| 0.3433 | 5.3874 | 598 | 0.8495 | 0.3276 | 0.8495 | 0.9217 |
| 0.3433 | 5.4054 | 600 | 0.8135 | 0.3276 | 0.8135 | 0.9020 |
| 0.3433 | 5.4234 | 602 | 0.7818 | 0.3412 | 0.7818 | 0.8842 |
| 0.3433 | 5.4414 | 604 | 0.7915 | 0.3412 | 0.7915 | 0.8897 |
| 0.3433 | 5.4595 | 606 | 0.7972 | 0.3412 | 0.7972 | 0.8929 |
| 0.3433 | 5.4775 | 608 | 0.8072 | 0.2935 | 0.8072 | 0.8985 |
| 0.3433 | 5.4955 | 610 | 0.7917 | 0.3412 | 0.7917 | 0.8898 |
| 0.3433 | 5.5135 | 612 | 0.8104 | 0.3412 | 0.8104 | 0.9002 |
| 0.3433 | 5.5315 | 614 | 0.8542 | 0.3276 | 0.8542 | 0.9243 |
| 0.3433 | 5.5495 | 616 | 0.8912 | 0.2802 | 0.8912 | 0.9440 |
| 0.3433 | 5.5676 | 618 | 0.9097 | 0.3103 | 0.9097 | 0.9538 |
| 0.3433 | 5.5856 | 620 | 0.8879 | 0.2802 | 0.8879 | 0.9423 |
| 0.3433 | 5.6036 | 622 | 0.8583 | 0.3234 | 0.8583 | 0.9264 |
| 0.3433 | 5.6216 | 624 | 0.8273 | 0.3190 | 0.8273 | 0.9096 |
| 0.3433 | 5.6396 | 626 | 0.8223 | 0.3190 | 0.8223 | 0.9068 |
| 0.3433 | 5.6577 | 628 | 0.8250 | 0.3234 | 0.8250 | 0.9083 |
| 0.3433 | 5.6757 | 630 | 0.7929 | 0.3374 | 0.7929 | 0.8904 |
| 0.3433 | 5.6937 | 632 | 0.7404 | 0.3005 | 0.7404 | 0.8604 |
| 0.3433 | 5.7117 | 634 | 0.7203 | 0.3104 | 0.7203 | 0.8487 |
| 0.3433 | 5.7297 | 636 | 0.7190 | 0.3104 | 0.7190 | 0.8479 |
| 0.3433 | 5.7477 | 638 | 0.7381 | 0.3104 | 0.7381 | 0.8591 |
| 0.3433 | 5.7658 | 640 | 0.7719 | 0.3374 | 0.7719 | 0.8786 |
| 0.3433 | 5.7838 | 642 | 0.8215 | 0.3234 | 0.8215 | 0.9063 |
| 0.3433 | 5.8018 | 644 | 0.8398 | 0.3234 | 0.8398 | 0.9164 |
| 0.3433 | 5.8198 | 646 | 0.8687 | 0.3012 | 0.8687 | 0.9320 |
| 0.3433 | 5.8378 | 648 | 0.8630 | 0.3059 | 0.8630 | 0.9290 |
| 0.3433 | 5.8559 | 650 | 0.8177 | 0.3374 | 0.8177 | 0.9043 |
| 0.3433 | 5.8739 | 652 | 0.7927 | 0.3056 | 0.7927 | 0.8903 |
| 0.3433 | 5.8919 | 654 | 0.7791 | 0.3056 | 0.7791 | 0.8826 |
| 0.3433 | 5.9099 | 656 | 0.7526 | 0.3056 | 0.7526 | 0.8675 |
| 0.3433 | 5.9279 | 658 | 0.7402 | 0.3153 | 0.7402 | 0.8603 |
| 0.3433 | 5.9459 | 660 | 0.7286 | 0.3153 | 0.7286 | 0.8536 |
| 0.3433 | 5.9640 | 662 | 0.7384 | 0.3153 | 0.7384 | 0.8593 |
| 0.3433 | 5.9820 | 664 | 0.7467 | 0.3005 | 0.7467 | 0.8641 |
| 0.3433 | 6.0 | 666 | 0.7619 | 0.3056 | 0.7619 | 0.8729 |
| 0.3433 | 6.0180 | 668 | 0.8046 | 0.3234 | 0.8046 | 0.8970 |
| 0.3433 | 6.0360 | 670 | 0.8289 | 0.3234 | 0.8289 | 0.9104 |
| 0.3433 | 6.0541 | 672 | 0.8205 | 0.3234 | 0.8205 | 0.9058 |
| 0.3433 | 6.0721 | 674 | 0.8146 | 0.3234 | 0.8146 | 0.9025 |
| 0.3433 | 6.0901 | 676 | 0.8282 | 0.3234 | 0.8282 | 0.9101 |
| 0.3433 | 6.1081 | 678 | 0.8427 | 0.3234 | 0.8427 | 0.9180 |
| 0.3433 | 6.1261 | 680 | 0.8504 | 0.3234 | 0.8504 | 0.9222 |
| 0.3433 | 6.1441 | 682 | 0.8864 | 0.3059 | 0.8864 | 0.9415 |
| 0.3433 | 6.1622 | 684 | 0.9262 | 0.2900 | 0.9262 | 0.9624 |
| 0.3433 | 6.1802 | 686 | 0.9369 | 0.2990 | 0.9369 | 0.9679 |
| 0.3433 | 6.1982 | 688 | 0.9517 | 0.2990 | 0.9517 | 0.9755 |
| 0.3433 | 6.2162 | 690 | 0.9064 | 0.2900 | 0.9064 | 0.9521 |
| 0.3433 | 6.2342 | 692 | 0.8394 | 0.3374 | 0.8394 | 0.9162 |
| 0.3433 | 6.2523 | 694 | 0.7926 | 0.3514 | 0.7926 | 0.8903 |
| 0.3433 | 6.2703 | 696 | 0.7591 | 0.3514 | 0.7591 | 0.8713 |
| 0.3433 | 6.2883 | 698 | 0.7486 | 0.3153 | 0.7486 | 0.8652 |
| 0.3433 | 6.3063 | 700 | 0.7519 | 0.2952 | 0.7519 | 0.8671 |
| 0.3433 | 6.3243 | 702 | 0.7539 | 0.2952 | 0.7539 | 0.8683 |
| 0.3433 | 6.3423 | 704 | 0.7716 | 0.3374 | 0.7716 | 0.8784 |
| 0.3433 | 6.3604 | 706 | 0.7922 | 0.3234 | 0.7922 | 0.8901 |
| 0.3433 | 6.3784 | 708 | 0.7905 | 0.3374 | 0.7905 | 0.8891 |
| 0.3433 | 6.3964 | 710 | 0.7742 | 0.3333 | 0.7742 | 0.8799 |
| 0.3433 | 6.4144 | 712 | 0.7715 | 0.3333 | 0.7715 | 0.8784 |
| 0.3433 | 6.4324 | 714 | 0.7809 | 0.3333 | 0.7809 | 0.8837 |
| 0.3433 | 6.4505 | 716 | 0.8061 | 0.3333 | 0.8061 | 0.8978 |
| 0.3433 | 6.4685 | 718 | 0.8254 | 0.3190 | 0.8254 | 0.9085 |
| 0.3433 | 6.4865 | 720 | 0.8256 | 0.3190 | 0.8256 | 0.9086 |
| 0.3433 | 6.5045 | 722 | 0.8143 | 0.3190 | 0.8143 | 0.9024 |
| 0.3433 | 6.5225 | 724 | 0.7938 | 0.3190 | 0.7938 | 0.8910 |
| 0.3433 | 6.5405 | 726 | 0.7871 | 0.3190 | 0.7871 | 0.8872 |
| 0.3433 | 6.5586 | 728 | 0.7605 | 0.3477 | 0.7605 | 0.8721 |
| 0.3433 | 6.5766 | 730 | 0.7620 | 0.3477 | 0.7620 | 0.8729 |
| 0.3433 | 6.5946 | 732 | 0.7593 | 0.3477 | 0.7593 | 0.8714 |
| 0.3433 | 6.6126 | 734 | 0.7401 | 0.3477 | 0.7401 | 0.8603 |
| 0.3433 | 6.6306 | 736 | 0.7263 | 0.3477 | 0.7263 | 0.8522 |
| 0.3433 | 6.6486 | 738 | 0.7208 | 0.3153 | 0.7208 | 0.8490 |
| 0.3433 | 6.6667 | 740 | 0.7382 | 0.3477 | 0.7382 | 0.8592 |
| 0.3433 | 6.6847 | 742 | 0.7536 | 0.3333 | 0.7536 | 0.8681 |
| 0.3433 | 6.7027 | 744 | 0.7676 | 0.3412 | 0.7676 | 0.8761 |
| 0.3433 | 6.7207 | 746 | 0.7719 | 0.3412 | 0.7719 | 0.8786 |
| 0.3433 | 6.7387 | 748 | 0.7676 | 0.3412 | 0.7676 | 0.8761 |
| 0.3433 | 6.7568 | 750 | 0.7700 | 0.3412 | 0.7700 | 0.8775 |
| 0.3433 | 6.7748 | 752 | 0.7431 | 0.3548 | 0.7431 | 0.8620 |
| 0.3433 | 6.7928 | 754 | 0.6925 | 0.3153 | 0.6925 | 0.8321 |
| 0.3433 | 6.8108 | 756 | 0.6785 | 0.3448 | 0.6785 | 0.8237 |
| 0.3433 | 6.8288 | 758 | 0.6822 | 0.3653 | 0.6822 | 0.8260 |
| 0.3433 | 6.8468 | 760 | 0.6891 | 0.3653 | 0.6891 | 0.8301 |
| 0.3433 | 6.8649 | 762 | 0.6943 | 0.3653 | 0.6943 | 0.8332 |
| 0.3433 | 6.8829 | 764 | 0.6763 | 0.3792 | 0.6763 | 0.8224 |
| 0.3433 | 6.9009 | 766 | 0.6880 | 0.3653 | 0.6880 | 0.8295 |
| 0.3433 | 6.9189 | 768 | 0.7360 | 0.3548 | 0.7360 | 0.8579 |
| 0.3433 | 6.9369 | 770 | 0.8011 | 0.3194 | 0.8011 | 0.8950 |
| 0.3433 | 6.9550 | 772 | 0.9053 | 0.3295 | 0.9053 | 0.9514 |
| 0.3433 | 6.9730 | 774 | 0.9719 | 0.3361 | 0.9719 | 0.9859 |
| 0.3433 | 6.9910 | 776 | 0.9705 | 0.3361 | 0.9705 | 0.9851 |
| 0.3433 | 7.0090 | 778 | 0.9157 | 0.3260 | 0.9157 | 0.9569 |
| 0.3433 | 7.0270 | 780 | 0.8476 | 0.3103 | 0.8476 | 0.9207 |
| 0.3433 | 7.0450 | 782 | 0.8027 | 0.3059 | 0.8027 | 0.8959 |
| 0.3433 | 7.0631 | 784 | 0.7649 | 0.3374 | 0.7649 | 0.8746 |
| 0.3433 | 7.0811 | 786 | 0.7316 | 0.3374 | 0.7316 | 0.8553 |
| 0.3433 | 7.0991 | 788 | 0.7358 | 0.3374 | 0.7358 | 0.8578 |
| 0.3433 | 7.1171 | 790 | 0.7683 | 0.3412 | 0.7683 | 0.8765 |
| 0.3433 | 7.1351 | 792 | 0.8274 | 0.3103 | 0.8274 | 0.9096 |
| 0.3433 | 7.1532 | 794 | 0.8561 | 0.3390 | 0.8561 | 0.9253 |
| 0.3433 | 7.1712 | 796 | 0.8576 | 0.3390 | 0.8576 | 0.9261 |
| 0.3433 | 7.1892 | 798 | 0.8344 | 0.3390 | 0.8344 | 0.9134 |
| 0.3433 | 7.2072 | 800 | 0.8095 | 0.3192 | 0.8095 | 0.8997 |
| 0.3433 | 7.2252 | 802 | 0.7926 | 0.3412 | 0.7926 | 0.8903 |
| 0.3433 | 7.2432 | 804 | 0.7601 | 0.3412 | 0.7601 | 0.8718 |
| 0.3433 | 7.2613 | 806 | 0.7541 | 0.3412 | 0.7541 | 0.8684 |
| 0.3433 | 7.2793 | 808 | 0.7283 | 0.3412 | 0.7283 | 0.8534 |
| 0.3433 | 7.2973 | 810 | 0.6933 | 0.2720 | 0.6933 | 0.8327 |
| 0.3433 | 7.3153 | 812 | 0.6732 | 0.2658 | 0.6732 | 0.8205 |
| 0.3433 | 7.3333 | 814 | 0.6790 | 0.2658 | 0.6790 | 0.8240 |
| 0.3433 | 7.3514 | 816 | 0.7148 | 0.3056 | 0.7148 | 0.8455 |
| 0.3433 | 7.3694 | 818 | 0.7636 | 0.3706 | 0.7636 | 0.8738 |
| 0.3433 | 7.3874 | 820 | 0.8118 | 0.3354 | 0.8118 | 0.9010 |
| 0.3433 | 7.4054 | 822 | 0.8566 | 0.2946 | 0.8566 | 0.9255 |
| 0.3433 | 7.4234 | 824 | 0.8803 | 0.3260 | 0.8803 | 0.9382 |
| 0.3433 | 7.4414 | 826 | 0.8686 | 0.3378 | 0.8686 | 0.9320 |
| 0.3433 | 7.4595 | 828 | 0.8698 | 0.3378 | 0.8698 | 0.9326 |
| 0.3433 | 7.4775 | 830 | 0.8752 | 0.3378 | 0.8752 | 0.9355 |
| 0.3433 | 7.4955 | 832 | 0.8701 | 0.2990 | 0.8701 | 0.9328 |
| 0.3433 | 7.5135 | 834 | 0.8644 | 0.2946 | 0.8644 | 0.9297 |
| 0.3433 | 7.5315 | 836 | 0.8440 | 0.3390 | 0.8440 | 0.9187 |
| 0.3433 | 7.5495 | 838 | 0.8058 | 0.3390 | 0.8058 | 0.8977 |
| 0.3433 | 7.5676 | 840 | 0.7719 | 0.3573 | 0.7719 | 0.8786 |
| 0.3433 | 7.5856 | 842 | 0.7484 | 0.3276 | 0.7484 | 0.8651 |
| 0.3433 | 7.6036 | 844 | 0.7265 | 0.3333 | 0.7265 | 0.8523 |
| 0.3433 | 7.6216 | 846 | 0.7215 | 0.3333 | 0.7215 | 0.8494 |
| 0.3433 | 7.6396 | 848 | 0.7350 | 0.3374 | 0.7350 | 0.8573 |
| 0.3433 | 7.6577 | 850 | 0.7610 | 0.3276 | 0.7610 | 0.8723 |
| 0.3433 | 7.6757 | 852 | 0.7962 | 0.3354 | 0.7962 | 0.8923 |
| 0.3433 | 7.6937 | 854 | 0.8177 | 0.3390 | 0.8177 | 0.9042 |
| 0.3433 | 7.7117 | 856 | 0.8293 | 0.3390 | 0.8293 | 0.9107 |
| 0.3433 | 7.7297 | 858 | 0.8235 | 0.3390 | 0.8235 | 0.9075 |
| 0.3433 | 7.7477 | 860 | 0.8215 | 0.3390 | 0.8215 | 0.9064 |
| 0.3433 | 7.7658 | 862 | 0.8062 | 0.3390 | 0.8062 | 0.8979 |
| 0.3433 | 7.7838 | 864 | 0.7831 | 0.3516 | 0.7831 | 0.8850 |
| 0.3433 | 7.8018 | 866 | 0.7464 | 0.3412 | 0.7464 | 0.8639 |
| 0.3433 | 7.8198 | 868 | 0.7472 | 0.3581 | 0.7472 | 0.8644 |
| 0.3433 | 7.8378 | 870 | 0.7588 | 0.3581 | 0.7588 | 0.8711 |
| 0.3433 | 7.8559 | 872 | 0.7578 | 0.3581 | 0.7578 | 0.8705 |
| 0.3433 | 7.8739 | 874 | 0.7604 | 0.3581 | 0.7604 | 0.8720 |
| 0.3433 | 7.8919 | 876 | 0.7501 | 0.3581 | 0.7501 | 0.8661 |
| 0.3433 | 7.9099 | 878 | 0.7380 | 0.3581 | 0.7380 | 0.8591 |
| 0.3433 | 7.9279 | 880 | 0.7306 | 0.3548 | 0.7306 | 0.8548 |
| 0.3433 | 7.9459 | 882 | 0.7192 | 0.3548 | 0.7192 | 0.8480 |
| 0.3433 | 7.9640 | 884 | 0.7203 | 0.3548 | 0.7203 | 0.8487 |
| 0.3433 | 7.9820 | 886 | 0.7319 | 0.3548 | 0.7319 | 0.8555 |
| 0.3433 | 8.0 | 888 | 0.7537 | 0.3412 | 0.7537 | 0.8682 |
| 0.3433 | 8.0180 | 890 | 0.7799 | 0.3276 | 0.7799 | 0.8831 |
| 0.3433 | 8.0360 | 892 | 0.7904 | 0.3276 | 0.7904 | 0.8891 |
| 0.3433 | 8.0541 | 894 | 0.7909 | 0.3276 | 0.7909 | 0.8894 |
| 0.3433 | 8.0721 | 896 | 0.7881 | 0.3276 | 0.7881 | 0.8877 |
| 0.3433 | 8.0901 | 898 | 0.7920 | 0.3316 | 0.7920 | 0.8899 |
| 0.3433 | 8.1081 | 900 | 0.7899 | 0.3449 | 0.7899 | 0.8888 |
| 0.3433 | 8.1261 | 902 | 0.7862 | 0.3449 | 0.7862 | 0.8867 |
| 0.3433 | 8.1441 | 904 | 0.7775 | 0.3581 | 0.7775 | 0.8818 |
| 0.3433 | 8.1622 | 906 | 0.7870 | 0.3362 | 0.7870 | 0.8871 |
| 0.3433 | 8.1802 | 908 | 0.7838 | 0.3643 | 0.7838 | 0.8853 |
| 0.3433 | 8.1982 | 910 | 0.7591 | 0.3581 | 0.7591 | 0.8713 |
| 0.3433 | 8.2162 | 912 | 0.7503 | 0.3581 | 0.7503 | 0.8662 |
| 0.3433 | 8.2342 | 914 | 0.7471 | 0.3581 | 0.7471 | 0.8644 |
| 0.3433 | 8.2523 | 916 | 0.7594 | 0.3581 | 0.7594 | 0.8715 |
| 0.3433 | 8.2703 | 918 | 0.7747 | 0.3643 | 0.7747 | 0.8802 |
| 0.3433 | 8.2883 | 920 | 0.7682 | 0.3581 | 0.7682 | 0.8765 |
| 0.3433 | 8.3063 | 922 | 0.7693 | 0.3581 | 0.7693 | 0.8771 |
| 0.3433 | 8.3243 | 924 | 0.7513 | 0.3548 | 0.7513 | 0.8668 |
| 0.3433 | 8.3423 | 926 | 0.7185 | 0.3243 | 0.7185 | 0.8476 |
| 0.3433 | 8.3604 | 928 | 0.6878 | 0.2922 | 0.6878 | 0.8294 |
| 0.3433 | 8.3784 | 930 | 0.6655 | 0.2810 | 0.6655 | 0.8158 |
| 0.3433 | 8.3964 | 932 | 0.6563 | 0.2750 | 0.6563 | 0.8101 |
| 0.3433 | 8.4144 | 934 | 0.6578 | 0.2750 | 0.6578 | 0.8111 |
| 0.3433 | 8.4324 | 936 | 0.6651 | 0.2750 | 0.6651 | 0.8155 |
| 0.3433 | 8.4505 | 938 | 0.6833 | 0.2810 | 0.6833 | 0.8266 |
| 0.3433 | 8.4685 | 940 | 0.6959 | 0.2810 | 0.6959 | 0.8342 |
| 0.3433 | 8.4865 | 942 | 0.7179 | 0.2571 | 0.7179 | 0.8473 |
| 0.3433 | 8.5045 | 944 | 0.7523 | 0.2912 | 0.7523 | 0.8674 |
| 0.3433 | 8.5225 | 946 | 0.7790 | 0.2963 | 0.7790 | 0.8826 |
| 0.3433 | 8.5405 | 948 | 0.7827 | 0.2963 | 0.7827 | 0.8847 |
| 0.3433 | 8.5586 | 950 | 0.7837 | 0.2963 | 0.7837 | 0.8852 |
| 0.3433 | 8.5766 | 952 | 0.7742 | 0.2963 | 0.7742 | 0.8799 |
| 0.3433 | 8.5946 | 954 | 0.7655 | 0.2963 | 0.7655 | 0.8749 |
| 0.3433 | 8.6126 | 956 | 0.7586 | 0.2963 | 0.7586 | 0.8710 |
| 0.3433 | 8.6306 | 958 | 0.7567 | 0.2963 | 0.7567 | 0.8699 |
| 0.3433 | 8.6486 | 960 | 0.7470 | 0.2912 | 0.7470 | 0.8643 |
| 0.3433 | 8.6667 | 962 | 0.7375 | 0.2912 | 0.7375 | 0.8588 |
| 0.3433 | 8.6847 | 964 | 0.7244 | 0.3005 | 0.7244 | 0.8511 |
| 0.3433 | 8.7027 | 966 | 0.7196 | 0.3005 | 0.7196 | 0.8483 |
| 0.3433 | 8.7207 | 968 | 0.7098 | 0.3005 | 0.7098 | 0.8425 |
| 0.3433 | 8.7387 | 970 | 0.6925 | 0.2658 | 0.6925 | 0.8322 |
| 0.3433 | 8.7568 | 972 | 0.6828 | 0.2810 | 0.6828 | 0.8263 |
| 0.3433 | 8.7748 | 974 | 0.6738 | 0.2810 | 0.6738 | 0.8209 |
| 0.3433 | 8.7928 | 976 | 0.6756 | 0.2810 | 0.6756 | 0.8219 |
| 0.3433 | 8.8108 | 978 | 0.6880 | 0.3153 | 0.6880 | 0.8295 |
| 0.3433 | 8.8288 | 980 | 0.6998 | 0.3005 | 0.6998 | 0.8365 |
| 0.3433 | 8.8468 | 982 | 0.7186 | 0.3005 | 0.7186 | 0.8477 |
| 0.3433 | 8.8649 | 984 | 0.7418 | 0.2963 | 0.7418 | 0.8613 |
| 0.3433 | 8.8829 | 986 | 0.7697 | 0.3276 | 0.7697 | 0.8773 |
| 0.3433 | 8.9009 | 988 | 0.7924 | 0.3276 | 0.7924 | 0.8902 |
| 0.3433 | 8.9189 | 990 | 0.8091 | 0.3573 | 0.8091 | 0.8995 |
| 0.3433 | 8.9369 | 992 | 0.8169 | 0.3573 | 0.8169 | 0.9038 |
| 0.3433 | 8.9550 | 994 | 0.8195 | 0.3573 | 0.8195 | 0.9052 |
| 0.3433 | 8.9730 | 996 | 0.8226 | 0.3573 | 0.8226 | 0.9070 |
| 0.3433 | 8.9910 | 998 | 0.8142 | 0.3573 | 0.8142 | 0.9024 |
| 0.059 | 9.0090 | 1000 | 0.8103 | 0.3573 | 0.8103 | 0.9002 |
| 0.059 | 9.0270 | 1002 | 0.8001 | 0.3573 | 0.8001 | 0.8945 |
| 0.059 | 9.0450 | 1004 | 0.7776 | 0.3276 | 0.7776 | 0.8818 |
| 0.059 | 9.0631 | 1006 | 0.7566 | 0.3412 | 0.7566 | 0.8698 |
| 0.059 | 9.0811 | 1008 | 0.7430 | 0.3103 | 0.7430 | 0.8620 |
| 0.059 | 9.0991 | 1010 | 0.7303 | 0.3056 | 0.7303 | 0.8546 |
| 0.059 | 9.1171 | 1012 | 0.7193 | 0.3056 | 0.7193 | 0.8481 |
| 0.059 | 9.1351 | 1014 | 0.7099 | 0.3056 | 0.7099 | 0.8426 |
| 0.059 | 9.1532 | 1016 | 0.7061 | 0.3056 | 0.7061 | 0.8403 |
| 0.059 | 9.1712 | 1018 | 0.7093 | 0.3056 | 0.7093 | 0.8422 |
| 0.059 | 9.1892 | 1020 | 0.7164 | 0.3056 | 0.7164 | 0.8464 |
| 0.059 | 9.2072 | 1022 | 0.7233 | 0.3103 | 0.7233 | 0.8505 |
| 0.059 | 9.2252 | 1024 | 0.7333 | 0.3412 | 0.7333 | 0.8564 |
| 0.059 | 9.2432 | 1026 | 0.7473 | 0.3276 | 0.7473 | 0.8644 |
| 0.059 | 9.2613 | 1028 | 0.7568 | 0.3276 | 0.7568 | 0.8700 |
| 0.059 | 9.2793 | 1030 | 0.7608 | 0.3276 | 0.7608 | 0.8722 |
| 0.059 | 9.2973 | 1032 | 0.7628 | 0.3276 | 0.7628 | 0.8734 |
| 0.059 | 9.3153 | 1034 | 0.7673 | 0.3276 | 0.7673 | 0.8760 |
| 0.059 | 9.3333 | 1036 | 0.7660 | 0.3276 | 0.7660 | 0.8752 |
| 0.059 | 9.3514 | 1038 | 0.7611 | 0.3276 | 0.7611 | 0.8724 |
| 0.059 | 9.3694 | 1040 | 0.7613 | 0.3276 | 0.7613 | 0.8725 |
| 0.059 | 9.3874 | 1042 | 0.7613 | 0.3276 | 0.7613 | 0.8725 |
| 0.059 | 9.4054 | 1044 | 0.7662 | 0.3276 | 0.7662 | 0.8754 |
| 0.059 | 9.4234 | 1046 | 0.7676 | 0.3276 | 0.7676 | 0.8761 |
| 0.059 | 9.4414 | 1048 | 0.7694 | 0.3276 | 0.7694 | 0.8771 |
| 0.059 | 9.4595 | 1050 | 0.7751 | 0.3276 | 0.7751 | 0.8804 |
| 0.059 | 9.4775 | 1052 | 0.7867 | 0.3573 | 0.7867 | 0.8870 |
| 0.059 | 9.4955 | 1054 | 0.8018 | 0.3573 | 0.8018 | 0.8954 |
| 0.059 | 9.5135 | 1056 | 0.8113 | 0.3573 | 0.8113 | 0.9007 |
| 0.059 | 9.5315 | 1058 | 0.8109 | 0.3573 | 0.8109 | 0.9005 |
| 0.059 | 9.5495 | 1060 | 0.8089 | 0.3573 | 0.8089 | 0.8994 |
| 0.059 | 9.5676 | 1062 | 0.8053 | 0.3573 | 0.8053 | 0.8974 |
| 0.059 | 9.5856 | 1064 | 0.8026 | 0.3573 | 0.8026 | 0.8959 |
| 0.059 | 9.6036 | 1066 | 0.7987 | 0.3573 | 0.7987 | 0.8937 |
| 0.059 | 9.6216 | 1068 | 0.7970 | 0.3573 | 0.7970 | 0.8927 |
| 0.059 | 9.6396 | 1070 | 0.7926 | 0.3573 | 0.7926 | 0.8903 |
| 0.059 | 9.6577 | 1072 | 0.7868 | 0.3276 | 0.7868 | 0.8870 |
| 0.059 | 9.6757 | 1074 | 0.7811 | 0.3412 | 0.7811 | 0.8838 |
| 0.059 | 9.6937 | 1076 | 0.7755 | 0.3412 | 0.7755 | 0.8806 |
| 0.059 | 9.7117 | 1078 | 0.7700 | 0.3412 | 0.7700 | 0.8775 |
| 0.059 | 9.7297 | 1080 | 0.7630 | 0.3412 | 0.7630 | 0.8735 |
| 0.059 | 9.7477 | 1082 | 0.7577 | 0.3412 | 0.7577 | 0.8705 |
| 0.059 | 9.7658 | 1084 | 0.7567 | 0.3412 | 0.7567 | 0.8699 |
| 0.059 | 9.7838 | 1086 | 0.7558 | 0.3412 | 0.7558 | 0.8694 |
| 0.059 | 9.8018 | 1088 | 0.7559 | 0.3412 | 0.7559 | 0.8694 |
| 0.059 | 9.8198 | 1090 | 0.7551 | 0.3412 | 0.7551 | 0.8689 |
| 0.059 | 9.8378 | 1092 | 0.7539 | 0.3412 | 0.7539 | 0.8683 |
| 0.059 | 9.8559 | 1094 | 0.7525 | 0.3412 | 0.7525 | 0.8675 |
| 0.059 | 9.8739 | 1096 | 0.7520 | 0.3412 | 0.7520 | 0.8672 |
| 0.059 | 9.8919 | 1098 | 0.7513 | 0.3412 | 0.7513 | 0.8668 |
| 0.059 | 9.9099 | 1100 | 0.7511 | 0.3412 | 0.7511 | 0.8667 |
| 0.059 | 9.9279 | 1102 | 0.7509 | 0.3412 | 0.7509 | 0.8665 |
| 0.059 | 9.9459 | 1104 | 0.7506 | 0.3412 | 0.7506 | 0.8664 |
| 0.059 | 9.9640 | 1106 | 0.7504 | 0.3412 | 0.7504 | 0.8662 |
| 0.059 | 9.9820 | 1108 | 0.7499 | 0.3412 | 0.7499 | 0.8660 |
| 0.059 | 10.0 | 1110 | 0.7497 | 0.3412 | 0.7497 | 0.8658 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
furrutiav/roberta_mixtral_nllfg_rubric_cola_none_item | furrutiav | 2024-11-26T16:09:39Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-11-26T16:09:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
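The card itself does not yet provide a snippet. As a minimal sketch only (assuming standard 🤗 Transformers usage, the repository id of this card, and mean pooling as one acceptable way to turn token features into a single vector; none of this is documented by the authors), loading the model for feature extraction might look like:

```python
# Minimal sketch, not an official example: load the checkpoint and extract
# mean-pooled features with the standard AutoModel API.
import torch
from transformers import AutoModel, AutoTokenizer

repo_id = "furrutiav/roberta_mixtral_nllfg_rubric_cola_none_item"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch, seq_len, hidden); mean pooling is one
# simple (assumed) way to get a single embedding per input.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)
```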
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
furrutiav/roberta_mixtral_nllfg_vanilla_mrpc_none_naive | furrutiav | 2024-11-26T16:06:59Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-11-26T16:06:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Frigg-v1.35-8b-HIGH-FANTASY-1024k-GGUF | mradermacher | 2024-11-26T16:04:52Z | 60 | 3 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-11-26T09:26:48Z | ---
base_model: MrRobotoAI/Frigg-v1.35-8b-HIGH-FANTASY-1024k
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/MrRobotoAI/Frigg-v1.35-8b-HIGH-FANTASY-1024k
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Frigg-v1.35-8b-HIGH-FANTASY-1024k-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
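As one concrete illustration (a sketch, not part of this card's instructions), the quants can also be loaded directly from Python through the `llama-cpp-python` bindings; the file name below is the Q4_K_M quant from the table that follows, assumed to be downloaded locally:

```python
# Sketch using llama-cpp-python rather than the llama.cpp CLI; assumes the
# Q4_K_M file listed below has already been downloaded to the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="Frigg-v1.35-8b-HIGH-FANTASY-1024k.Q4_K_M.gguf",
    n_ctx=2048,  # context size; adjust to taste
)
out = llm("Once upon a time in a high-fantasy kingdom,", max_tokens=64)
print(out["choices"][0]["text"])
```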
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Frigg-v1.35-8b-HIGH-FANTASY-1024k-GGUF/resolve/main/Frigg-v1.35-8b-HIGH-FANTASY-1024k.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Frigg-v1.35-8b-HIGH-FANTASY-1024k-GGUF/resolve/main/Frigg-v1.35-8b-HIGH-FANTASY-1024k.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Frigg-v1.35-8b-HIGH-FANTASY-1024k-GGUF/resolve/main/Frigg-v1.35-8b-HIGH-FANTASY-1024k.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Frigg-v1.35-8b-HIGH-FANTASY-1024k-GGUF/resolve/main/Frigg-v1.35-8b-HIGH-FANTASY-1024k.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Frigg-v1.35-8b-HIGH-FANTASY-1024k-GGUF/resolve/main/Frigg-v1.35-8b-HIGH-FANTASY-1024k.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Frigg-v1.35-8b-HIGH-FANTASY-1024k-GGUF/resolve/main/Frigg-v1.35-8b-HIGH-FANTASY-1024k.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Frigg-v1.35-8b-HIGH-FANTASY-1024k-GGUF/resolve/main/Frigg-v1.35-8b-HIGH-FANTASY-1024k.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Frigg-v1.35-8b-HIGH-FANTASY-1024k-GGUF/resolve/main/Frigg-v1.35-8b-HIGH-FANTASY-1024k.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Frigg-v1.35-8b-HIGH-FANTASY-1024k-GGUF/resolve/main/Frigg-v1.35-8b-HIGH-FANTASY-1024k.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Frigg-v1.35-8b-HIGH-FANTASY-1024k-GGUF/resolve/main/Frigg-v1.35-8b-HIGH-FANTASY-1024k.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Frigg-v1.35-8b-HIGH-FANTASY-1024k-GGUF/resolve/main/Frigg-v1.35-8b-HIGH-FANTASY-1024k.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Frigg-v1.35-8b-HIGH-FANTASY-1024k-GGUF/resolve/main/Frigg-v1.35-8b-HIGH-FANTASY-1024k.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Frigg-v1.35-8b-HIGH-FANTASY-1024k-GGUF/resolve/main/Frigg-v1.35-8b-HIGH-FANTASY-1024k.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
tedad09/DuplicateCrossEncoder-SecondTrain-20Epochs | tedad09 | 2024-11-26T16:03:49Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T16:02:51Z | ---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
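No snippet is provided yet. Since the repository is tagged as an XLM-RoBERTa cross-encoder for text classification, one plausible sketch is to score a sentence pair in a single forward pass; the example pair and the interpretation of the logits are assumptions, not documented behaviour:

```python
# Hypothetical cross-encoder scoring sketch; label semantics are not documented
# by the authors and may differ from what is assumed here.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "tedad09/DuplicateCrossEncoder-SecondTrain-20Epochs"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# A cross-encoder reads both texts together rather than embedding them separately.
inputs = tokenizer(
    "How do I reset my password?",
    "What is the procedure to change my password?",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # raw scores; apply softmax/sigmoid depending on how the model was trained
```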
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF | mradermacher | 2024-11-26T16:03:26Z | 152 | 1 | transformers | [
"transformers",
"gguf",
"de",
"bg",
"cs",
"da",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sl",
"sv",
"sk",
"base_model:openGPT-X/Teuken-7B-instruct-commercial-v0.4",
"base_model:quantized:openGPT-X/Teuken-7B-instruct-commercial-v0.4",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-26T13:51:01Z | ---
base_model: openGPT-X/Teuken-7B-instruct-commercial-v0.4
language:
- de
- bg
- cs
- da
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sl
- sv
- sk
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/openGPT-X/Teuken-7B-instruct-commercial-v0.4
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
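If it helps, a single quant can also be fetched programmatically before use; the sketch below relies on `huggingface_hub` (an extra dependency assumed by this example, not required by the card) and picks the Q4_K_M file from the table that follows:

```python
# Download one quant file from this repo (sketch only).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF",
    filename="Teuken-7B-instruct-commercial-v0.4.i1-Q4_K_M.gguf",
)
print(path)  # local path that can then be passed to llama.cpp, e.g. `llama-cli -m <path>`
```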
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-IQ1_S.gguf) | i1-IQ1_S | 2.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-IQ1_M.gguf) | i1-IQ1_M | 2.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-IQ2_S.gguf) | i1-IQ2_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-IQ2_M.gguf) | i1-IQ2_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-Q2_K.gguf) | i1-Q2_K | 3.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-IQ3_S.gguf) | i1-IQ3_S | 3.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.6 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.6 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-Q4_0.gguf) | i1-Q4_0 | 4.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.i1-Q6_K.gguf) | i1-Q6_K | 6.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kh38/my-cool-model1126 | kh38 | 2024-11-26T16:02:36Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T15:58:00Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# final_merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
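Conceptually, a linear merge takes a (normalized) weighted average of the corresponding parameters of the input models; the sketch below only illustrates that idea and is not mergekit's actual implementation. In particular, it ignores the per-slice weights that the YAML further down applies to each four-layer block:

```python
# Illustrative-only linear merge: per-tensor weighted average of state dicts.
# Not mergekit's code; real merges also handle per-slice weights and tokenizers.
import torch

def linear_merge(state_dicts, weights, normalize=True):
    total = sum(weights) if normalize else 1.0
    merged = {}
    for name, ref in state_dicts[0].items():
        acc = torch.zeros_like(ref, dtype=torch.float32)
        for sd, w in zip(state_dicts, weights):
            acc += w * sd[name].float()
        merged[name] = (acc / total).to(torch.bfloat16)  # bfloat16, as in the config
    return merged
```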
### Models Merged
The following models were included in the merge:
* ../evol_merge_storage/input_models/Meta-Llama-3-8B-Instruct_fictional_mathqa_Spanish_v1_2935777833
* ../evol_merge_storage/input_models/Llama-3.1-Swallow-8B-v0.2_4249862252
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: linear
parameters:
int8_mask: 1.0
normalize: 1.0
slices:
- sources:
- layer_range: [0, 4]
model: ../evol_merge_storage/input_models/Llama-3.1-Swallow-8B-v0.2_4249862252
parameters:
weight: 0.6682912881042561
- layer_range: [0, 4]
model: ../evol_merge_storage/input_models/Meta-Llama-3-8B-Instruct_fictional_mathqa_Spanish_v1_2935777833
parameters:
weight: 0.8169238866732543
- sources:
- layer_range: [4, 8]
model: ../evol_merge_storage/input_models/Llama-3.1-Swallow-8B-v0.2_4249862252
parameters:
weight: 0.824383032378547
- layer_range: [4, 8]
model: ../evol_merge_storage/input_models/Meta-Llama-3-8B-Instruct_fictional_mathqa_Spanish_v1_2935777833
parameters:
weight: 0.585658209873855
- sources:
- layer_range: [8, 12]
model: ../evol_merge_storage/input_models/Llama-3.1-Swallow-8B-v0.2_4249862252
parameters:
weight: 0.27646115516439407
- layer_range: [8, 12]
model: ../evol_merge_storage/input_models/Meta-Llama-3-8B-Instruct_fictional_mathqa_Spanish_v1_2935777833
parameters:
weight: 0.6421834326611193
- sources:
- layer_range: [12, 16]
model: ../evol_merge_storage/input_models/Llama-3.1-Swallow-8B-v0.2_4249862252
parameters:
weight: 0.19263623796034313
- layer_range: [12, 16]
model: ../evol_merge_storage/input_models/Meta-Llama-3-8B-Instruct_fictional_mathqa_Spanish_v1_2935777833
parameters:
weight: 0.28408471038937055
- sources:
- layer_range: [16, 20]
model: ../evol_merge_storage/input_models/Llama-3.1-Swallow-8B-v0.2_4249862252
parameters:
weight: 0.5452257394072989
- layer_range: [16, 20]
model: ../evol_merge_storage/input_models/Meta-Llama-3-8B-Instruct_fictional_mathqa_Spanish_v1_2935777833
parameters:
weight: 0.5345283738895676
- sources:
- layer_range: [20, 24]
model: ../evol_merge_storage/input_models/Llama-3.1-Swallow-8B-v0.2_4249862252
parameters:
weight: 0.6694408493851071
- layer_range: [20, 24]
model: ../evol_merge_storage/input_models/Meta-Llama-3-8B-Instruct_fictional_mathqa_Spanish_v1_2935777833
parameters:
weight: 0.36909260818773304
- sources:
- layer_range: [24, 28]
model: ../evol_merge_storage/input_models/Llama-3.1-Swallow-8B-v0.2_4249862252
parameters:
weight: 0.5611229475708517
- layer_range: [24, 28]
model: ../evol_merge_storage/input_models/Meta-Llama-3-8B-Instruct_fictional_mathqa_Spanish_v1_2935777833
parameters:
weight: 0.1981276090509695
- sources:
- layer_range: [28, 32]
model: ../evol_merge_storage/input_models/Llama-3.1-Swallow-8B-v0.2_4249862252
parameters:
weight: 0.6876393865290468
- layer_range: [28, 32]
model: ../evol_merge_storage/input_models/Meta-Llama-3-8B-Instruct_fictional_mathqa_Spanish_v1_2935777833
parameters:
weight: 0.15713286244650526
```
|
mradermacher/Teuken-7B-instruct-commercial-v0.4-GGUF | mradermacher | 2024-11-26T16:00:48Z | 533 | 4 | transformers | [
"transformers",
"gguf",
"de",
"bg",
"cs",
"da",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sl",
"sv",
"sk",
"base_model:openGPT-X/Teuken-7B-instruct-commercial-v0.4",
"base_model:quantized:openGPT-X/Teuken-7B-instruct-commercial-v0.4",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-26T12:49:55Z | ---
base_model: openGPT-X/Teuken-7B-instruct-commercial-v0.4
language:
- de
- bg
- cs
- da
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sl
- sv
- sk
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/openGPT-X/Teuken-7B-instruct-commercial-v0.4
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.IQ4_XS.gguf) | IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.Q5_K_S.gguf) | Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.Q6_K.gguf) | Q6_K | 6.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.Q8_0.gguf) | Q8_0 | 8.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Teuken-7B-instruct-commercial-v0.4-GGUF/resolve/main/Teuken-7B-instruct-commercial-v0.4.f16.gguf) | f16 | 15.0 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/marco-o1-uncensored-i1-GGUF | mradermacher | 2024-11-26T16:00:07Z | 213 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:thirdeyeai/marco-o1-uncensored",
"base_model:quantized:thirdeyeai/marco-o1-uncensored",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-26T15:21:40Z | ---
base_model: thirdeyeai/marco-o1-uncensored
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/thirdeyeai/marco-o1-uncensored
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF/resolve/main/marco-o1-uncensored.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/marco-o1-uncensored-GGUF | mradermacher | 2024-11-26T15:58:25Z | 407 | 3 | transformers | [
"transformers",
"gguf",
"en",
"base_model:thirdeyeai/marco-o1-uncensored",
"base_model:quantized:thirdeyeai/marco-o1-uncensored",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-26T13:08:29Z | ---
base_model: thirdeyeai/marco-o1-uncensored
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/thirdeyeai/marco-o1-uncensored
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/marco-o1-uncensored-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/marco-o1-uncensored-GGUF/resolve/main/marco-o1-uncensored.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
tokoin/Mistral-Nemo-Instruct-2407-Q8_0-GGUF | tokoin | 2024-11-26T15:57:28Z | 9 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ru",
"zh",
"ja",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:quantized:mistralai/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-26T15:56:34Z | ---
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
license: apache-2.0
base_model: mistralai/Mistral-Nemo-Instruct-2407
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- llama-cpp
- gguf-my-repo
---
# tokoin/Mistral-Nemo-Instruct-2407-Q8_0-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-Nemo-Instruct-2407`](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo tokoin/Mistral-Nemo-Instruct-2407-Q8_0-GGUF --hf-file mistral-nemo-instruct-2407-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo tokoin/Mistral-Nemo-Instruct-2407-Q8_0-GGUF --hf-file mistral-nemo-instruct-2407-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo tokoin/Mistral-Nemo-Instruct-2407-Q8_0-GGUF --hf-file mistral-nemo-instruct-2407-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo tokoin/Mistral-Nemo-Instruct-2407-Q8_0-GGUF --hf-file mistral-nemo-instruct-2407-q8_0.gguf -c 2048
```
|
furrutiav/roberta_mixtral_nllfg_rubric_mrpc_none_item | furrutiav | 2024-11-26T15:57:10Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-11-26T15:56:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prithivMLmods/Flux-Meme-Xd-LoRA | prithivMLmods | 2024-11-26T15:55:33Z | 708 | 11 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"flux",
"meme",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-11-26T15:04:06Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- flux
- meme
widget:
- text: >-
meme, A medium-sized painting of a white T-rex in the middle of a dark,
stormy night. The t-rex is facing towards the left side of the frame, its
head turned towards the right. Its mouth is open, revealing its sharp teeth.
A rooster is standing in the foreground of the painting, with a red cap on
its head. The roosters head is turned to the right, and the word "Remember
who you are" is written in white text above it. The background is a deep
blue, with dark gray clouds and a crescent moon in the upper left corner of
the image. There are mountains in the background, and a few other animals
can be seen in the lower right corner.
output:
url: images/M1.png
- text: >-
meme, A cartoon drawing of a brown cat and a white sheep. The sheep is
facing each other and the cat is facing towards the left side of the image.
The brown cat has a black nose and a black mouth. The white sheep has a
white body and black legs. The background is a light peach color. There is a
text bubble above the brown cat that says "If you feel sad I can eat you".
output:
url: images/M3.png
- text: >-
meme, A cartoon drawing of two zebras facing each other. The zebra on the
left is facing the right. The horse on the right is facing to the left. The
zebrab is facing towards the right and has a black mane on its head. The
mane is black and white. The sky is light blue and there are birds flying in
the sky. There is a text bubble above the zebras head that says "UPGRADE
MAN!"
output:
url: images/M4.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: meme
license: creativeml-openrail-m
---
# F-Meme.FLUX.1-Dev
<Gallery />
**The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.**
## Model description
**prithivMLmods/F-Meme**
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 20 & 2200 |
| Epoch | 10 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 10
## Best Dimensions
- 768 x 1024 (Best)
- 1024 x 1024 (Default)
## Setting Up
```python
import torch
from diffusers import DiffusionPipeline
base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
lora_repo = "prithivMLmods/F-Meme"
trigger_word = "meme"
pipe.load_lora_weights(lora_repo)
device = torch.device("cuda")
pipe.to(device)
```
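Below is a short generation sketch that continues from the setup above; the prompt, guidance scale, and step count are illustrative assumptions rather than recommended settings.
```python
# Generate an image; the prompt must include the trigger word `meme` (see below).
prompt = "meme, A cartoon drawing of a brown cat and a white sheep facing each other."
image = pipe(
    prompt,
    width=768,
    height=1024,              # the best dimensions listed above
    num_inference_steps=30,   # illustrative value
    guidance_scale=3.5,       # illustrative value
).images[0]
image.save("meme.png")
```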
## Trigger words
You should use `meme` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/prithivMLmods/F-Meme/tree/main) them in the Files & versions tab. |
TIGER-Lab/MAmmoTH2-7B-Plus | TIGER-Lab | 2024-11-26T15:52:56Z | 10,670 | 7 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:TIGER-Lab/WebInstructSub",
"arxiv:2405.03548",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-06T08:36:03Z | ---
language:
- en
license: mit
library_name: transformers
datasets:
- TIGER-Lab/WebInstructSub
metrics:
- accuracy
model-index:
- name: MAmmoTH2-7B-Plus
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 55.75
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TIGER-Lab/MAmmoTH2-7B-Plus
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 18.93
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TIGER-Lab/MAmmoTH2-7B-Plus
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 16.09
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TIGER-Lab/MAmmoTH2-7B-Plus
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.03
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TIGER-Lab/MAmmoTH2-7B-Plus
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.11
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TIGER-Lab/MAmmoTH2-7B-Plus
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 22.41
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TIGER-Lab/MAmmoTH2-7B-Plus
name: Open LLM Leaderboard
---
# 🦣 MAmmoTH2: Scaling Instructions from the Web
Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/)
Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548)
Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)
## Introduction
Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 36.7% on MATH and from 36% to 68.4% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities.
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** |
|:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------|
| 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) |
| 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) |
| 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) |
## Training Data
Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details.

## Training Procedure
The models are fine-tuned on the WEBINSTRUCT dataset using the original Mistral, Llama-3, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details.
## Evaluation
The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results:
| **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** |
|:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------|
| **MAmmoTH2-7B** (Updated) | 29.0 | 36.7 | 68.4 | 32.4 | 62.4 | 58.6 | 81.7 | 52.7 |
| **MAmmoTH2-8B** (Updated) | 30.3 | 35.8 | 70.4 | 35.2 | 64.2 | 62.1 | 82.2 | 54.3 |
| **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 |
| **MAmmoTH2-7B-Plus** (Updated) | 31.2 | 46.0 | 84.6 | 33.8 | 63.8 | 63.3 | 84.4 | 58.1 |
| **MAmmoTH2-8B-Plus** (Updated) | 31.5 | 43.0 | 85.2 | 35.8 | 66.7 | 69.7 | 84.3 | 59.4 |
| **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 |
To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval.
## Usage
You can use the models through Hugging Face's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution, as sketched below.
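The snippet below is a minimal sketch of this pipeline-based usage; the example problem and generation settings are illustrative and not the authors' evaluation setup.
```python
from transformers import pipeline

# Create a text-generation pipeline with the model of your choice.
generator = pipeline(
    "text-generation",
    model="TIGER-Lab/MAmmoTH2-7B-Plus",
    torch_dtype="auto",
    device_map="auto",
)

# Feed in a math problem and read off the generated solution.
problem = "A train travels 120 km in 2 hours. What is its average speed in km/h?"
result = generator(problem, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```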
Check our Github repo for more advanced use: https://github.com/TIGER-AI-Lab/MAmmoTH2
## Limitations
We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary with the complexity and specifics of the math problem, and not all mathematical fields can be covered comprehensively.
## Citation
If you use the models, data, or code from this project, please cite the original paper:
```
@article{yue2024mammoth2,
title={MAmmoTH2: Scaling Instructions from the Web},
author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu},
journal={arXiv preprint arXiv:2405.03548},
year={2024}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TIGER-Lab__MAmmoTH2-7B-Plus)
| Metric |Value|
|-------------------|----:|
|Avg. |21.22|
|IFEval (0-Shot) |55.75|
|BBH (3-Shot) |18.93|
|MATH Lvl 5 (4-Shot)|16.09|
|GPQA (0-shot) | 4.03|
|MuSR (0-shot) |10.11|
|MMLU-PRO (5-shot) |22.41|
|
MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k30_task2_organization_fold0 | MayBashendy | 2024-11-26T15:50:26Z | 164 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T15:34:54Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits_FineTuningAraBERT_AugV5_k30_task2_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits_FineTuningAraBERT_AugV5_k30_task2_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7899
- Qwk: 0.3986
- Mse: 0.7899
- Rmse: 0.8888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0211 | 2 | 3.5292 | 0.0019 | 3.5292 | 1.8786 |
| No log | 0.0421 | 4 | 1.8217 | 0.0197 | 1.8217 | 1.3497 |
| No log | 0.0632 | 6 | 1.1832 | 0.1431 | 1.1832 | 1.0877 |
| No log | 0.0842 | 8 | 1.2980 | 0.1431 | 1.2980 | 1.1393 |
| No log | 0.1053 | 10 | 0.8562 | 0.0704 | 0.8562 | 0.9253 |
| No log | 0.1263 | 12 | 0.8345 | 0.0269 | 0.8345 | 0.9135 |
| No log | 0.1474 | 14 | 1.0194 | 0.0455 | 1.0194 | 1.0097 |
| No log | 0.1684 | 16 | 1.3907 | 0.1394 | 1.3907 | 1.1793 |
| No log | 0.1895 | 18 | 1.7340 | 0.0550 | 1.7340 | 1.3168 |
| No log | 0.2105 | 20 | 1.5940 | 0.1646 | 1.5940 | 1.2625 |
| No log | 0.2316 | 22 | 1.3126 | 0.1495 | 1.3126 | 1.1457 |
| No log | 0.2526 | 24 | 0.9302 | 0.0594 | 0.9302 | 0.9645 |
| No log | 0.2737 | 26 | 1.1032 | 0.0455 | 1.1032 | 1.0503 |
| No log | 0.2947 | 28 | 1.2386 | 0.0162 | 1.2386 | 1.1129 |
| No log | 0.3158 | 30 | 1.1532 | 0.1567 | 1.1532 | 1.0739 |
| No log | 0.3368 | 32 | 1.0571 | 0.1567 | 1.0571 | 1.0282 |
| No log | 0.3579 | 34 | 1.0582 | 0.1567 | 1.0582 | 1.0287 |
| No log | 0.3789 | 36 | 1.1273 | 0.1567 | 1.1273 | 1.0618 |
| No log | 0.4 | 38 | 1.0432 | 0.0739 | 1.0432 | 1.0214 |
| No log | 0.4211 | 40 | 0.9063 | 0.1024 | 0.9063 | 0.9520 |
| No log | 0.4421 | 42 | 0.8738 | 0.1310 | 0.8738 | 0.9348 |
| No log | 0.4632 | 44 | 1.0145 | 0.0455 | 1.0145 | 1.0072 |
| No log | 0.4842 | 46 | 1.1076 | 0.1567 | 1.1076 | 1.0524 |
| No log | 0.5053 | 48 | 1.2979 | 0.1567 | 1.2979 | 1.1393 |
| No log | 0.5263 | 50 | 1.2604 | 0.1567 | 1.2604 | 1.1227 |
| No log | 0.5474 | 52 | 1.0365 | 0.1702 | 1.0365 | 1.0181 |
| No log | 0.5684 | 54 | 0.9577 | 0.1298 | 0.9577 | 0.9786 |
| No log | 0.5895 | 56 | 0.9500 | 0.1167 | 0.9500 | 0.9747 |
| No log | 0.6105 | 58 | 0.7974 | 0.0784 | 0.7974 | 0.8930 |
| No log | 0.6316 | 60 | 0.7566 | 0.1955 | 0.7566 | 0.8698 |
| No log | 0.6526 | 62 | 0.8915 | 0.2650 | 0.8915 | 0.9442 |
| No log | 0.6737 | 64 | 1.1709 | 0.2144 | 1.1709 | 1.0821 |
| No log | 0.6947 | 66 | 1.1165 | 0.2144 | 1.1165 | 1.0566 |
| No log | 0.7158 | 68 | 0.8324 | 0.3069 | 0.8324 | 0.9124 |
| No log | 0.7368 | 70 | 0.6919 | 0.1943 | 0.6919 | 0.8318 |
| No log | 0.7579 | 72 | 0.6379 | 0.2247 | 0.6379 | 0.7987 |
| No log | 0.7789 | 74 | 0.6715 | 0.4029 | 0.6715 | 0.8194 |
| No log | 0.8 | 76 | 0.8046 | 0.3772 | 0.8046 | 0.8970 |
| No log | 0.8211 | 78 | 1.0003 | 0.3373 | 1.0003 | 1.0001 |
| No log | 0.8421 | 80 | 1.0001 | 0.3803 | 1.0001 | 1.0000 |
| No log | 0.8632 | 82 | 0.7343 | 0.3798 | 0.7343 | 0.8569 |
| No log | 0.8842 | 84 | 0.7422 | 0.3019 | 0.7422 | 0.8615 |
| No log | 0.9053 | 86 | 0.8236 | 0.3926 | 0.8236 | 0.9075 |
| No log | 0.9263 | 88 | 0.8328 | 0.4024 | 0.8328 | 0.9126 |
| No log | 0.9474 | 90 | 0.7892 | 0.2253 | 0.7892 | 0.8883 |
| No log | 0.9684 | 92 | 0.8001 | 0.2777 | 0.8001 | 0.8945 |
| No log | 0.9895 | 94 | 0.8156 | 0.3106 | 0.8156 | 0.9031 |
| No log | 1.0105 | 96 | 0.8847 | 0.3327 | 0.8847 | 0.9406 |
| No log | 1.0316 | 98 | 0.9163 | 0.3256 | 0.9163 | 0.9572 |
| No log | 1.0526 | 100 | 0.8332 | 0.2766 | 0.8332 | 0.9128 |
| No log | 1.0737 | 102 | 0.9362 | 0.3485 | 0.9362 | 0.9676 |
| No log | 1.0947 | 104 | 0.9757 | 0.4161 | 0.9757 | 0.9878 |
| No log | 1.1158 | 106 | 0.9707 | 0.4151 | 0.9707 | 0.9852 |
| No log | 1.1368 | 108 | 0.9019 | 0.2726 | 0.9019 | 0.9497 |
| No log | 1.1579 | 110 | 0.8858 | 0.3495 | 0.8858 | 0.9412 |
| No log | 1.1789 | 112 | 0.8658 | 0.3310 | 0.8658 | 0.9305 |
| No log | 1.2 | 114 | 0.7963 | 0.2980 | 0.7963 | 0.8924 |
| No log | 1.2211 | 116 | 0.8230 | 0.2892 | 0.8230 | 0.9072 |
| No log | 1.2421 | 118 | 0.8561 | 0.2386 | 0.8561 | 0.9253 |
| No log | 1.2632 | 120 | 0.7875 | 0.3404 | 0.7875 | 0.8874 |
| No log | 1.2842 | 122 | 0.7008 | 0.2562 | 0.7008 | 0.8372 |
| No log | 1.3053 | 124 | 0.6983 | 0.4272 | 0.6983 | 0.8356 |
| No log | 1.3263 | 126 | 0.7063 | 0.3541 | 0.7063 | 0.8404 |
| No log | 1.3474 | 128 | 0.7464 | 0.3523 | 0.7464 | 0.8640 |
| No log | 1.3684 | 130 | 0.7803 | 0.3897 | 0.7803 | 0.8833 |
| No log | 1.3895 | 132 | 0.7906 | 0.3596 | 0.7905 | 0.8891 |
| No log | 1.4105 | 134 | 0.7943 | 0.3446 | 0.7943 | 0.8912 |
| No log | 1.4316 | 136 | 0.7854 | 0.3062 | 0.7854 | 0.8862 |
| No log | 1.4526 | 138 | 0.7664 | 0.3446 | 0.7664 | 0.8754 |
| No log | 1.4737 | 140 | 0.8072 | 0.3055 | 0.8072 | 0.8985 |
| No log | 1.4947 | 142 | 0.8316 | 0.3115 | 0.8316 | 0.9119 |
| No log | 1.5158 | 144 | 0.8365 | 0.3596 | 0.8365 | 0.9146 |
| No log | 1.5368 | 146 | 0.9301 | 0.3228 | 0.9301 | 0.9644 |
| No log | 1.5579 | 148 | 1.0012 | 0.2051 | 1.0012 | 1.0006 |
| No log | 1.5789 | 150 | 1.0140 | 0.2610 | 1.0140 | 1.0070 |
| No log | 1.6 | 152 | 0.8993 | 0.3533 | 0.8993 | 0.9483 |
| No log | 1.6211 | 154 | 0.7738 | 0.3270 | 0.7738 | 0.8796 |
| No log | 1.6421 | 156 | 0.7708 | 0.2359 | 0.7708 | 0.8780 |
| No log | 1.6632 | 158 | 0.7517 | 0.2668 | 0.7517 | 0.8670 |
| No log | 1.6842 | 160 | 0.6707 | 0.2382 | 0.6707 | 0.8189 |
| No log | 1.7053 | 162 | 0.6456 | 0.3958 | 0.6456 | 0.8035 |
| No log | 1.7263 | 164 | 0.6441 | 0.4055 | 0.6441 | 0.8025 |
| No log | 1.7474 | 166 | 0.6617 | 0.4077 | 0.6617 | 0.8135 |
| No log | 1.7684 | 168 | 0.6388 | 0.3181 | 0.6388 | 0.7993 |
| No log | 1.7895 | 170 | 0.6626 | 0.3627 | 0.6626 | 0.8140 |
| No log | 1.8105 | 172 | 0.7550 | 0.4294 | 0.7550 | 0.8689 |
| No log | 1.8316 | 174 | 0.8069 | 0.4175 | 0.8069 | 0.8983 |
| No log | 1.8526 | 176 | 0.9276 | 0.3720 | 0.9276 | 0.9631 |
| No log | 1.8737 | 178 | 1.1790 | 0.2296 | 1.1790 | 1.0858 |
| No log | 1.8947 | 180 | 1.0909 | 0.2651 | 1.0909 | 1.0445 |
| No log | 1.9158 | 182 | 0.8192 | 0.4294 | 0.8192 | 0.9051 |
| No log | 1.9368 | 184 | 0.7432 | 0.3355 | 0.7432 | 0.8621 |
| No log | 1.9579 | 186 | 0.7360 | 0.3494 | 0.7360 | 0.8579 |
| No log | 1.9789 | 188 | 0.7415 | 0.3126 | 0.7415 | 0.8611 |
| No log | 2.0 | 190 | 0.7696 | 0.3239 | 0.7696 | 0.8773 |
| No log | 2.0211 | 192 | 0.7171 | 0.2434 | 0.7171 | 0.8468 |
| No log | 2.0421 | 194 | 0.6720 | 0.3233 | 0.6720 | 0.8197 |
| No log | 2.0632 | 196 | 0.6565 | 0.3073 | 0.6565 | 0.8103 |
| No log | 2.0842 | 198 | 0.6613 | 0.2743 | 0.6613 | 0.8132 |
| No log | 2.1053 | 200 | 0.6568 | 0.2743 | 0.6568 | 0.8104 |
| No log | 2.1263 | 202 | 0.7036 | 0.2536 | 0.7036 | 0.8388 |
| No log | 2.1474 | 204 | 0.7071 | 0.2536 | 0.7071 | 0.8409 |
| No log | 2.1684 | 206 | 0.7027 | 0.3286 | 0.7027 | 0.8383 |
| No log | 2.1895 | 208 | 0.6746 | 0.3804 | 0.6746 | 0.8214 |
| No log | 2.2105 | 210 | 0.6759 | 0.3974 | 0.6759 | 0.8221 |
| No log | 2.2316 | 212 | 0.6882 | 0.4507 | 0.6882 | 0.8296 |
| No log | 2.2526 | 214 | 0.7176 | 0.3965 | 0.7176 | 0.8471 |
| No log | 2.2737 | 216 | 0.7001 | 0.3965 | 0.7001 | 0.8367 |
| No log | 2.2947 | 218 | 0.6448 | 0.4214 | 0.6448 | 0.8030 |
| No log | 2.3158 | 220 | 0.6269 | 0.3066 | 0.6269 | 0.7918 |
| No log | 2.3368 | 222 | 0.6549 | 0.2937 | 0.6549 | 0.8093 |
| No log | 2.3579 | 224 | 0.6620 | 0.3355 | 0.6620 | 0.8136 |
| No log | 2.3789 | 226 | 0.6866 | 0.3723 | 0.6866 | 0.8286 |
| No log | 2.4 | 228 | 0.6970 | 0.3906 | 0.6970 | 0.8349 |
| No log | 2.4211 | 230 | 0.7202 | 0.4077 | 0.7202 | 0.8486 |
| No log | 2.4421 | 232 | 0.7411 | 0.4214 | 0.7411 | 0.8609 |
| No log | 2.4632 | 234 | 0.7462 | 0.4077 | 0.7462 | 0.8638 |
| No log | 2.4842 | 236 | 0.7772 | 0.4103 | 0.7772 | 0.8816 |
| No log | 2.5053 | 238 | 0.8069 | 0.4225 | 0.8069 | 0.8983 |
| No log | 2.5263 | 240 | 0.7854 | 0.3809 | 0.7854 | 0.8863 |
| No log | 2.5474 | 242 | 0.7167 | 0.3923 | 0.7167 | 0.8466 |
| No log | 2.5684 | 244 | 0.6287 | 0.3569 | 0.6287 | 0.7929 |
| No log | 2.5895 | 246 | 0.6111 | 0.2551 | 0.6111 | 0.7818 |
| No log | 2.6105 | 248 | 0.6012 | 0.2407 | 0.6012 | 0.7754 |
| No log | 2.6316 | 250 | 0.5958 | 0.2992 | 0.5958 | 0.7719 |
| No log | 2.6526 | 252 | 0.6322 | 0.3190 | 0.6322 | 0.7951 |
| No log | 2.6737 | 254 | 0.7619 | 0.3620 | 0.7619 | 0.8729 |
| No log | 2.6947 | 256 | 0.8685 | 0.3514 | 0.8685 | 0.9319 |
| No log | 2.7158 | 258 | 0.9341 | 0.3022 | 0.9341 | 0.9665 |
| No log | 2.7368 | 260 | 0.9154 | 0.3158 | 0.9154 | 0.9567 |
| No log | 2.7579 | 262 | 0.8076 | 0.4225 | 0.8076 | 0.8987 |
| No log | 2.7789 | 264 | 0.7156 | 0.4272 | 0.7156 | 0.8460 |
| No log | 2.8 | 266 | 0.6936 | 0.4264 | 0.6936 | 0.8328 |
| No log | 2.8211 | 268 | 0.6883 | 0.4394 | 0.6883 | 0.8296 |
| No log | 2.8421 | 270 | 0.7365 | 0.4060 | 0.7365 | 0.8582 |
| No log | 2.8632 | 272 | 0.7700 | 0.4035 | 0.7700 | 0.8775 |
| No log | 2.8842 | 274 | 0.7462 | 0.4035 | 0.7462 | 0.8638 |
| No log | 2.9053 | 276 | 0.7262 | 0.3719 | 0.7262 | 0.8522 |
| No log | 2.9263 | 278 | 0.6808 | 0.4037 | 0.6808 | 0.8251 |
| No log | 2.9474 | 280 | 0.6880 | 0.3888 | 0.6880 | 0.8294 |
| No log | 2.9684 | 282 | 0.7263 | 0.3878 | 0.7263 | 0.8523 |
| No log | 2.9895 | 284 | 0.7466 | 0.3878 | 0.7466 | 0.8640 |
| No log | 3.0105 | 286 | 0.7392 | 0.4335 | 0.7392 | 0.8598 |
| No log | 3.0316 | 288 | 0.7552 | 0.4198 | 0.7552 | 0.8690 |
| No log | 3.0526 | 290 | 0.7797 | 0.4491 | 0.7797 | 0.8830 |
| No log | 3.0737 | 292 | 0.7514 | 0.4507 | 0.7514 | 0.8668 |
| No log | 3.0947 | 294 | 0.7513 | 0.4516 | 0.7513 | 0.8667 |
| No log | 3.1158 | 296 | 0.7446 | 0.4060 | 0.7446 | 0.8629 |
| No log | 3.1368 | 298 | 0.7747 | 0.4011 | 0.7747 | 0.8801 |
| No log | 3.1579 | 300 | 0.7959 | 0.3879 | 0.7959 | 0.8921 |
| No log | 3.1789 | 302 | 0.7478 | 0.3218 | 0.7478 | 0.8648 |
| No log | 3.2 | 304 | 0.6810 | 0.3685 | 0.6810 | 0.8252 |
| No log | 3.2211 | 306 | 0.6559 | 0.3737 | 0.6559 | 0.8099 |
| No log | 3.2421 | 308 | 0.6688 | 0.3915 | 0.6688 | 0.8178 |
| No log | 3.2632 | 310 | 0.7085 | 0.4199 | 0.7085 | 0.8417 |
| No log | 3.2842 | 312 | 0.7512 | 0.4334 | 0.7512 | 0.8667 |
| No log | 3.3053 | 314 | 0.7315 | 0.4334 | 0.7315 | 0.8553 |
| No log | 3.3263 | 316 | 0.6862 | 0.3456 | 0.6862 | 0.8284 |
| No log | 3.3474 | 318 | 0.6827 | 0.3787 | 0.6827 | 0.8262 |
| No log | 3.3684 | 320 | 0.6893 | 0.3932 | 0.6893 | 0.8302 |
| No log | 3.3895 | 322 | 0.7221 | 0.4491 | 0.7221 | 0.8498 |
| No log | 3.4105 | 324 | 0.7287 | 0.4491 | 0.7287 | 0.8537 |
| No log | 3.4316 | 326 | 0.7271 | 0.3923 | 0.7271 | 0.8527 |
| No log | 3.4526 | 328 | 0.6854 | 0.3523 | 0.6854 | 0.8279 |
| No log | 3.4737 | 330 | 0.6722 | 0.2937 | 0.6722 | 0.8199 |
| No log | 3.4947 | 332 | 0.6680 | 0.2382 | 0.6680 | 0.8173 |
| No log | 3.5158 | 334 | 0.6605 | 0.2382 | 0.6605 | 0.8127 |
| No log | 3.5368 | 336 | 0.6567 | 0.3195 | 0.6567 | 0.8104 |
| No log | 3.5579 | 338 | 0.6859 | 0.3569 | 0.6859 | 0.8282 |
| No log | 3.5789 | 340 | 0.7161 | 0.3404 | 0.7161 | 0.8462 |
| No log | 3.6 | 342 | 0.7088 | 0.3559 | 0.7088 | 0.8419 |
| No log | 3.6211 | 344 | 0.6973 | 0.3615 | 0.6973 | 0.8350 |
| No log | 3.6421 | 346 | 0.7283 | 0.4378 | 0.7283 | 0.8534 |
| No log | 3.6632 | 348 | 0.7431 | 0.4516 | 0.7431 | 0.8620 |
| No log | 3.6842 | 350 | 0.7584 | 0.4371 | 0.7584 | 0.8709 |
| No log | 3.7053 | 352 | 0.8260 | 0.4374 | 0.8260 | 0.9088 |
| No log | 3.7263 | 354 | 0.8574 | 0.4235 | 0.8574 | 0.9260 |
| No log | 3.7474 | 356 | 0.8080 | 0.4110 | 0.8080 | 0.8989 |
| No log | 3.7684 | 358 | 0.8063 | 0.3906 | 0.8063 | 0.8979 |
| No log | 3.7895 | 360 | 0.7548 | 0.3595 | 0.7548 | 0.8688 |
| No log | 3.8105 | 362 | 0.6700 | 0.4065 | 0.6700 | 0.8185 |
| No log | 3.8316 | 364 | 0.6530 | 0.3728 | 0.6530 | 0.8081 |
| No log | 3.8526 | 366 | 0.6630 | 0.3384 | 0.6630 | 0.8142 |
| No log | 3.8737 | 368 | 0.6695 | 0.3384 | 0.6695 | 0.8182 |
| No log | 3.8947 | 370 | 0.6794 | 0.3540 | 0.6794 | 0.8243 |
| No log | 3.9158 | 372 | 0.6822 | 0.3787 | 0.6822 | 0.8259 |
| No log | 3.9368 | 374 | 0.6809 | 0.3941 | 0.6809 | 0.8251 |
| No log | 3.9579 | 376 | 0.6932 | 0.3941 | 0.6932 | 0.8326 |
| No log | 3.9789 | 378 | 0.7142 | 0.3941 | 0.7142 | 0.8451 |
| No log | 4.0 | 380 | 0.8015 | 0.4057 | 0.8015 | 0.8953 |
| No log | 4.0211 | 382 | 0.9597 | 0.4033 | 0.9597 | 0.9796 |
| No log | 4.0421 | 384 | 1.1098 | 0.2346 | 1.1098 | 1.0535 |
| No log | 4.0632 | 386 | 1.1091 | 0.2346 | 1.1091 | 1.0531 |
| No log | 4.0842 | 388 | 0.9854 | 0.3699 | 0.9854 | 0.9927 |
| No log | 4.1053 | 390 | 0.8133 | 0.3595 | 0.8133 | 0.9018 |
| No log | 4.1263 | 392 | 0.6970 | 0.4011 | 0.6970 | 0.8348 |
| No log | 4.1474 | 394 | 0.6611 | 0.3280 | 0.6611 | 0.8131 |
| No log | 4.1684 | 396 | 0.6572 | 0.3280 | 0.6572 | 0.8107 |
| No log | 4.1895 | 398 | 0.6656 | 0.4179 | 0.6656 | 0.8158 |
| No log | 4.2105 | 400 | 0.7135 | 0.4342 | 0.7135 | 0.8447 |
| No log | 4.2316 | 402 | 0.7603 | 0.3772 | 0.7603 | 0.8720 |
| No log | 4.2526 | 404 | 0.7327 | 0.3772 | 0.7327 | 0.8560 |
| No log | 4.2737 | 406 | 0.7087 | 0.4342 | 0.7087 | 0.8418 |
| No log | 4.2947 | 408 | 0.7172 | 0.4057 | 0.7172 | 0.8468 |
| No log | 4.3158 | 410 | 0.7515 | 0.4199 | 0.7515 | 0.8669 |
| No log | 4.3368 | 412 | 0.7882 | 0.3772 | 0.7882 | 0.8878 |
| No log | 4.3579 | 414 | 0.8429 | 0.3784 | 0.8429 | 0.9181 |
| No log | 4.3789 | 416 | 0.8288 | 0.3477 | 0.8288 | 0.9104 |
| No log | 4.4 | 418 | 0.7623 | 0.3897 | 0.7623 | 0.8731 |
| No log | 4.4211 | 420 | 0.6938 | 0.4351 | 0.6938 | 0.8330 |
| No log | 4.4421 | 422 | 0.6680 | 0.3795 | 0.6680 | 0.8173 |
| No log | 4.4632 | 424 | 0.6602 | 0.4086 | 0.6602 | 0.8125 |
| No log | 4.4842 | 426 | 0.6701 | 0.3941 | 0.6701 | 0.8186 |
| No log | 4.5053 | 428 | 0.7019 | 0.4099 | 0.7019 | 0.8378 |
| No log | 4.5263 | 430 | 0.7405 | 0.4914 | 0.7405 | 0.8605 |
| No log | 4.5474 | 432 | 0.7493 | 0.4905 | 0.7493 | 0.8656 |
| No log | 4.5684 | 434 | 0.7311 | 0.5042 | 0.7311 | 0.8550 |
| No log | 4.5895 | 436 | 0.7418 | 0.5032 | 0.7418 | 0.8613 |
| No log | 4.6105 | 438 | 0.7748 | 0.4772 | 0.7748 | 0.8802 |
| No log | 4.6316 | 440 | 0.7740 | 0.4772 | 0.7740 | 0.8798 |
| No log | 4.6526 | 442 | 0.7466 | 0.5255 | 0.7466 | 0.8641 |
| No log | 4.6737 | 444 | 0.6944 | 0.5158 | 0.6944 | 0.8333 |
| No log | 4.6947 | 446 | 0.6665 | 0.4926 | 0.6665 | 0.8164 |
| No log | 4.7158 | 448 | 0.6736 | 0.5178 | 0.6736 | 0.8207 |
| No log | 4.7368 | 450 | 0.7185 | 0.5145 | 0.7185 | 0.8477 |
| No log | 4.7579 | 452 | 0.8258 | 0.5006 | 0.8258 | 0.9087 |
| No log | 4.7789 | 454 | 0.9527 | 0.3514 | 0.9527 | 0.9761 |
| No log | 4.8 | 456 | 1.0533 | 0.2813 | 1.0533 | 1.0263 |
| No log | 4.8211 | 458 | 1.0610 | 0.2965 | 1.0610 | 1.0300 |
| No log | 4.8421 | 460 | 0.9522 | 0.4243 | 0.9522 | 0.9758 |
| No log | 4.8632 | 462 | 0.8302 | 0.4265 | 0.8302 | 0.9112 |
| No log | 4.8842 | 464 | 0.7359 | 0.4629 | 0.7359 | 0.8579 |
| No log | 4.9053 | 466 | 0.6666 | 0.4762 | 0.6666 | 0.8164 |
| No log | 4.9263 | 468 | 0.6380 | 0.4483 | 0.6380 | 0.7988 |
| No log | 4.9474 | 470 | 0.6149 | 0.4335 | 0.6149 | 0.7841 |
| No log | 4.9684 | 472 | 0.6318 | 0.3897 | 0.6318 | 0.7949 |
| No log | 4.9895 | 474 | 0.6725 | 0.4049 | 0.6725 | 0.8200 |
| No log | 5.0105 | 476 | 0.7007 | 0.3906 | 0.7007 | 0.8370 |
| No log | 5.0316 | 478 | 0.7331 | 0.3906 | 0.7331 | 0.8562 |
| No log | 5.0526 | 480 | 0.7256 | 0.3906 | 0.7256 | 0.8518 |
| No log | 5.0737 | 482 | 0.6954 | 0.3751 | 0.6954 | 0.8339 |
| No log | 5.0947 | 484 | 0.6861 | 0.4626 | 0.6861 | 0.8283 |
| No log | 5.1158 | 486 | 0.7084 | 0.4907 | 0.7084 | 0.8417 |
| No log | 5.1368 | 488 | 0.7502 | 0.4900 | 0.7502 | 0.8662 |
| No log | 5.1579 | 490 | 0.8373 | 0.4623 | 0.8373 | 0.9151 |
| No log | 5.1789 | 492 | 0.8986 | 0.4163 | 0.8986 | 0.9479 |
| No log | 5.2 | 494 | 0.8956 | 0.4419 | 0.8956 | 0.9464 |
| No log | 5.2211 | 496 | 0.8676 | 0.4634 | 0.8676 | 0.9314 |
| No log | 5.2421 | 498 | 0.8281 | 0.4758 | 0.8281 | 0.9100 |
| 0.3143 | 5.2632 | 500 | 0.7725 | 0.4618 | 0.7725 | 0.8789 |
| 0.3143 | 5.2842 | 502 | 0.6922 | 0.4767 | 0.6922 | 0.8320 |
| 0.3143 | 5.3053 | 504 | 0.6458 | 0.4489 | 0.6458 | 0.8036 |
| 0.3143 | 5.3263 | 506 | 0.6442 | 0.4497 | 0.6442 | 0.8026 |
| 0.3143 | 5.3474 | 508 | 0.6536 | 0.4640 | 0.6536 | 0.8084 |
| 0.3143 | 5.3684 | 510 | 0.7077 | 0.4900 | 0.7077 | 0.8412 |
| 0.3143 | 5.3895 | 512 | 0.8221 | 0.4491 | 0.8221 | 0.9067 |
| 0.3143 | 5.4105 | 514 | 0.9266 | 0.3725 | 0.9266 | 0.9626 |
| 0.3143 | 5.4316 | 516 | 0.9654 | 0.3303 | 0.9654 | 0.9826 |
| 0.3143 | 5.4526 | 518 | 0.9311 | 0.3725 | 0.9311 | 0.9649 |
| 0.3143 | 5.4737 | 520 | 0.8257 | 0.4634 | 0.8257 | 0.9087 |
| 0.3143 | 5.4947 | 522 | 0.7362 | 0.5161 | 0.7362 | 0.8580 |
| 0.3143 | 5.5158 | 524 | 0.6908 | 0.5038 | 0.6908 | 0.8311 |
| 0.3143 | 5.5368 | 526 | 0.6834 | 0.5175 | 0.6834 | 0.8267 |
| 0.3143 | 5.5579 | 528 | 0.7058 | 0.5038 | 0.7058 | 0.8401 |
| 0.3143 | 5.5789 | 530 | 0.7263 | 0.4900 | 0.7263 | 0.8522 |
| 0.3143 | 5.6 | 532 | 0.7769 | 0.4892 | 0.7769 | 0.8814 |
| 0.3143 | 5.6211 | 534 | 0.8506 | 0.4366 | 0.8506 | 0.9223 |
| 0.3143 | 5.6421 | 536 | 0.8967 | 0.3651 | 0.8967 | 0.9470 |
| 0.3143 | 5.6632 | 538 | 0.8671 | 0.4366 | 0.8671 | 0.9312 |
| 0.3143 | 5.6842 | 540 | 0.7941 | 0.4758 | 0.7941 | 0.8911 |
| 0.3143 | 5.7053 | 542 | 0.7420 | 0.4900 | 0.7420 | 0.8614 |
| 0.3143 | 5.7263 | 544 | 0.6879 | 0.4625 | 0.6879 | 0.8294 |
| 0.3143 | 5.7474 | 546 | 0.6684 | 0.4481 | 0.6684 | 0.8175 |
| 0.3143 | 5.7684 | 548 | 0.6665 | 0.4335 | 0.6665 | 0.8164 |
| 0.3143 | 5.7895 | 550 | 0.6677 | 0.4170 | 0.6677 | 0.8171 |
| 0.3143 | 5.8105 | 552 | 0.6713 | 0.4170 | 0.6713 | 0.8193 |
| 0.3143 | 5.8316 | 554 | 0.6620 | 0.4170 | 0.6620 | 0.8136 |
| 0.3143 | 5.8526 | 556 | 0.6525 | 0.4170 | 0.6525 | 0.8078 |
| 0.3143 | 5.8737 | 558 | 0.6446 | 0.4170 | 0.6446 | 0.8028 |
| 0.3143 | 5.8947 | 560 | 0.6618 | 0.3840 | 0.6618 | 0.8135 |
| 0.3143 | 5.9158 | 562 | 0.6765 | 0.4181 | 0.6765 | 0.8225 |
| 0.3143 | 5.9368 | 564 | 0.6787 | 0.4483 | 0.6787 | 0.8238 |
| 0.3143 | 5.9579 | 566 | 0.6603 | 0.4483 | 0.6603 | 0.8126 |
| 0.3143 | 5.9789 | 568 | 0.6272 | 0.4198 | 0.6272 | 0.7920 |
| 0.3143 | 6.0 | 570 | 0.5969 | 0.4494 | 0.5969 | 0.7726 |
| 0.3143 | 6.0211 | 572 | 0.5901 | 0.4353 | 0.5901 | 0.7682 |
| 0.3143 | 6.0421 | 574 | 0.5926 | 0.4353 | 0.5926 | 0.7698 |
| 0.3143 | 6.0632 | 576 | 0.6065 | 0.4214 | 0.6065 | 0.7788 |
| 0.3143 | 6.0842 | 578 | 0.6175 | 0.4507 | 0.6175 | 0.7858 |
| 0.3143 | 6.1053 | 580 | 0.6383 | 0.4774 | 0.6383 | 0.7989 |
| 0.3143 | 6.1263 | 582 | 0.6836 | 0.4625 | 0.6836 | 0.8268 |
| 0.3143 | 6.1474 | 584 | 0.7391 | 0.4765 | 0.7391 | 0.8597 |
| 0.3143 | 6.1684 | 586 | 0.7713 | 0.4496 | 0.7713 | 0.8783 |
| 0.3143 | 6.1895 | 588 | 0.7759 | 0.4496 | 0.7759 | 0.8809 |
| 0.3143 | 6.2105 | 590 | 0.7371 | 0.4624 | 0.7371 | 0.8585 |
| 0.3143 | 6.2316 | 592 | 0.6938 | 0.4625 | 0.6938 | 0.8329 |
| 0.3143 | 6.2526 | 594 | 0.6593 | 0.4767 | 0.6593 | 0.8120 |
| 0.3143 | 6.2737 | 596 | 0.6505 | 0.4908 | 0.6505 | 0.8066 |
| 0.3143 | 6.2947 | 598 | 0.6578 | 0.4908 | 0.6578 | 0.8110 |
| 0.3143 | 6.3158 | 600 | 0.6875 | 0.4900 | 0.6875 | 0.8291 |
| 0.3143 | 6.3368 | 602 | 0.7120 | 0.4624 | 0.7120 | 0.8438 |
| 0.3143 | 6.3579 | 604 | 0.7170 | 0.4624 | 0.7170 | 0.8468 |
| 0.3143 | 6.3789 | 606 | 0.7207 | 0.4762 | 0.7207 | 0.8489 |
| 0.3143 | 6.4 | 608 | 0.7371 | 0.5027 | 0.7371 | 0.8586 |
| 0.3143 | 6.4211 | 610 | 0.7687 | 0.4891 | 0.7687 | 0.8767 |
| 0.3143 | 6.4421 | 612 | 0.7766 | 0.4891 | 0.7766 | 0.8812 |
| 0.3143 | 6.4632 | 614 | 0.7794 | 0.4891 | 0.7794 | 0.8828 |
| 0.3143 | 6.4842 | 616 | 0.7710 | 0.4891 | 0.7710 | 0.8781 |
| 0.3143 | 6.5053 | 618 | 0.7349 | 0.5295 | 0.7349 | 0.8573 |
| 0.3143 | 6.5263 | 620 | 0.7199 | 0.5295 | 0.7199 | 0.8485 |
| 0.3143 | 6.5474 | 622 | 0.6827 | 0.5295 | 0.6827 | 0.8263 |
| 0.3143 | 6.5684 | 624 | 0.6605 | 0.5038 | 0.6605 | 0.8127 |
| 0.3143 | 6.5895 | 626 | 0.6537 | 0.5038 | 0.6537 | 0.8085 |
| 0.3143 | 6.6105 | 628 | 0.6699 | 0.5038 | 0.6699 | 0.8185 |
| 0.3143 | 6.6316 | 630 | 0.6782 | 0.5161 | 0.6782 | 0.8235 |
| 0.3143 | 6.6526 | 632 | 0.7038 | 0.5027 | 0.7038 | 0.8389 |
| 0.3143 | 6.6737 | 634 | 0.7457 | 0.4892 | 0.7457 | 0.8635 |
| 0.3143 | 6.6947 | 636 | 0.7865 | 0.4759 | 0.7865 | 0.8868 |
| 0.3143 | 6.7158 | 638 | 0.8400 | 0.4650 | 0.8400 | 0.9165 |
| 0.3143 | 6.7368 | 640 | 0.8631 | 0.4650 | 0.8631 | 0.9290 |
| 0.3143 | 6.7579 | 642 | 0.8345 | 0.4764 | 0.8345 | 0.9135 |
| 0.3143 | 6.7789 | 644 | 0.7934 | 0.4635 | 0.7934 | 0.8907 |
| 0.3143 | 6.8 | 646 | 0.7347 | 0.5027 | 0.7347 | 0.8571 |
| 0.3143 | 6.8211 | 648 | 0.7004 | 0.5038 | 0.7004 | 0.8369 |
| 0.3143 | 6.8421 | 650 | 0.6751 | 0.5038 | 0.6751 | 0.8217 |
| 0.3143 | 6.8632 | 652 | 0.6594 | 0.4907 | 0.6594 | 0.8120 |
| 0.3143 | 6.8842 | 654 | 0.6657 | 0.4907 | 0.6657 | 0.8159 |
| 0.3143 | 6.9053 | 656 | 0.6870 | 0.4907 | 0.6870 | 0.8289 |
| 0.3143 | 6.9263 | 658 | 0.7139 | 0.5295 | 0.7139 | 0.8449 |
| 0.3143 | 6.9474 | 660 | 0.7142 | 0.5295 | 0.7142 | 0.8451 |
| 0.3143 | 6.9684 | 662 | 0.6960 | 0.5038 | 0.6960 | 0.8343 |
| 0.3143 | 6.9895 | 664 | 0.6783 | 0.5038 | 0.6783 | 0.8236 |
| 0.3143 | 7.0105 | 666 | 0.6690 | 0.5038 | 0.6690 | 0.8179 |
| 0.3143 | 7.0316 | 668 | 0.6584 | 0.5038 | 0.6584 | 0.8114 |
| 0.3143 | 7.0526 | 670 | 0.6530 | 0.5038 | 0.6530 | 0.8081 |
| 0.3143 | 7.0737 | 672 | 0.6606 | 0.5038 | 0.6606 | 0.8128 |
| 0.3143 | 7.0947 | 674 | 0.6664 | 0.4762 | 0.6664 | 0.8163 |
| 0.3143 | 7.1158 | 676 | 0.6645 | 0.4762 | 0.6645 | 0.8152 |
| 0.3143 | 7.1368 | 678 | 0.6545 | 0.4762 | 0.6545 | 0.8090 |
| 0.3143 | 7.1579 | 680 | 0.6378 | 0.4900 | 0.6378 | 0.7986 |
| 0.3143 | 7.1789 | 682 | 0.6168 | 0.4767 | 0.6168 | 0.7854 |
| 0.3143 | 7.2 | 684 | 0.6052 | 0.4633 | 0.6052 | 0.7780 |
| 0.3143 | 7.2211 | 686 | 0.6024 | 0.4499 | 0.6024 | 0.7761 |
| 0.3143 | 7.2421 | 688 | 0.6080 | 0.4499 | 0.6080 | 0.7797 |
| 0.3143 | 7.2632 | 690 | 0.6251 | 0.4907 | 0.6251 | 0.7906 |
| 0.3143 | 7.2842 | 692 | 0.6485 | 0.5038 | 0.6485 | 0.8053 |
| 0.3143 | 7.3053 | 694 | 0.6777 | 0.4762 | 0.6777 | 0.8232 |
| 0.3143 | 7.3263 | 696 | 0.6952 | 0.4762 | 0.6952 | 0.8338 |
| 0.3143 | 7.3474 | 698 | 0.6984 | 0.5027 | 0.6984 | 0.8357 |
| 0.3143 | 7.3684 | 700 | 0.7008 | 0.5027 | 0.7008 | 0.8372 |
| 0.3143 | 7.3895 | 702 | 0.7166 | 0.5027 | 0.7166 | 0.8465 |
| 0.3143 | 7.4105 | 704 | 0.7387 | 0.4892 | 0.7387 | 0.8595 |
| 0.3143 | 7.4316 | 706 | 0.7825 | 0.4118 | 0.7825 | 0.8846 |
| 0.3143 | 7.4526 | 708 | 0.8059 | 0.3986 | 0.8059 | 0.8977 |
| 0.3143 | 7.4737 | 710 | 0.8089 | 0.4008 | 0.8089 | 0.8994 |
| 0.3143 | 7.4947 | 712 | 0.8109 | 0.4028 | 0.8109 | 0.9005 |
| 0.3143 | 7.5158 | 714 | 0.7808 | 0.4381 | 0.7808 | 0.8836 |
| 0.3143 | 7.5368 | 716 | 0.7569 | 0.4381 | 0.7569 | 0.8700 |
| 0.3143 | 7.5579 | 718 | 0.7400 | 0.5027 | 0.7400 | 0.8602 |
| 0.3143 | 7.5789 | 720 | 0.7295 | 0.5027 | 0.7295 | 0.8541 |
| 0.3143 | 7.6 | 722 | 0.7216 | 0.5027 | 0.7216 | 0.8495 |
| 0.3143 | 7.6211 | 724 | 0.7214 | 0.4623 | 0.7214 | 0.8494 |
| 0.3143 | 7.6421 | 726 | 0.7157 | 0.4758 | 0.7157 | 0.8460 |
| 0.3143 | 7.6632 | 728 | 0.7168 | 0.4758 | 0.7168 | 0.8466 |
| 0.3143 | 7.6842 | 730 | 0.7374 | 0.4623 | 0.7374 | 0.8587 |
| 0.3143 | 7.7053 | 732 | 0.7751 | 0.3854 | 0.7751 | 0.8804 |
| 0.3143 | 7.7263 | 734 | 0.8058 | 0.3557 | 0.8058 | 0.8976 |
| 0.3143 | 7.7474 | 736 | 0.8094 | 0.3557 | 0.8094 | 0.8997 |
| 0.3143 | 7.7684 | 738 | 0.8032 | 0.3854 | 0.8032 | 0.8962 |
| 0.3143 | 7.7895 | 740 | 0.7856 | 0.3854 | 0.7856 | 0.8863 |
| 0.3143 | 7.8105 | 742 | 0.7529 | 0.4488 | 0.7529 | 0.8677 |
| 0.3143 | 7.8316 | 744 | 0.7181 | 0.4623 | 0.7181 | 0.8474 |
| 0.3143 | 7.8526 | 746 | 0.6933 | 0.4758 | 0.6933 | 0.8326 |
| 0.3143 | 7.8737 | 748 | 0.6748 | 0.4892 | 0.6748 | 0.8215 |
| 0.3143 | 7.8947 | 750 | 0.6685 | 0.5027 | 0.6685 | 0.8176 |
| 0.3143 | 7.9158 | 752 | 0.6797 | 0.4892 | 0.6797 | 0.8245 |
| 0.3143 | 7.9368 | 754 | 0.6993 | 0.4892 | 0.6993 | 0.8363 |
| 0.3143 | 7.9579 | 756 | 0.7229 | 0.4758 | 0.7229 | 0.8502 |
| 0.3143 | 7.9789 | 758 | 0.7454 | 0.4758 | 0.7454 | 0.8634 |
| 0.3143 | 8.0 | 760 | 0.7642 | 0.4623 | 0.7642 | 0.8742 |
| 0.3143 | 8.0211 | 762 | 0.7638 | 0.4758 | 0.7638 | 0.8740 |
| 0.3143 | 8.0421 | 764 | 0.7508 | 0.4758 | 0.7508 | 0.8665 |
| 0.3143 | 8.0632 | 766 | 0.7350 | 0.4758 | 0.7350 | 0.8573 |
| 0.3143 | 8.0842 | 768 | 0.7180 | 0.5027 | 0.7180 | 0.8473 |
| 0.3143 | 8.1053 | 770 | 0.7112 | 0.5027 | 0.7112 | 0.8434 |
| 0.3143 | 8.1263 | 772 | 0.7006 | 0.5161 | 0.7006 | 0.8370 |
| 0.3143 | 8.1474 | 774 | 0.6959 | 0.5161 | 0.6959 | 0.8342 |
| 0.3143 | 8.1684 | 776 | 0.7021 | 0.5161 | 0.7021 | 0.8379 |
| 0.3143 | 8.1895 | 778 | 0.7236 | 0.4512 | 0.7236 | 0.8507 |
| 0.3143 | 8.2105 | 780 | 0.7464 | 0.4381 | 0.7464 | 0.8639 |
| 0.3143 | 8.2316 | 782 | 0.7738 | 0.4265 | 0.7738 | 0.8797 |
| 0.3143 | 8.2526 | 784 | 0.7913 | 0.4008 | 0.7913 | 0.8896 |
| 0.3143 | 8.2737 | 786 | 0.7898 | 0.4008 | 0.7898 | 0.8887 |
| 0.3143 | 8.2947 | 788 | 0.7767 | 0.4008 | 0.7767 | 0.8813 |
| 0.3143 | 8.3158 | 790 | 0.7710 | 0.4008 | 0.7710 | 0.8780 |
| 0.3143 | 8.3368 | 792 | 0.7701 | 0.4008 | 0.7701 | 0.8775 |
| 0.3143 | 8.3579 | 794 | 0.7660 | 0.3986 | 0.7660 | 0.8752 |
| 0.3143 | 8.3789 | 796 | 0.7556 | 0.4118 | 0.7556 | 0.8693 |
| 0.3143 | 8.4 | 798 | 0.7514 | 0.4250 | 0.7514 | 0.8668 |
| 0.3143 | 8.4211 | 800 | 0.7526 | 0.4250 | 0.7526 | 0.8675 |
| 0.3143 | 8.4421 | 802 | 0.7615 | 0.4250 | 0.7615 | 0.8726 |
| 0.3143 | 8.4632 | 804 | 0.7655 | 0.4250 | 0.7655 | 0.8750 |
| 0.3143 | 8.4842 | 806 | 0.7668 | 0.4250 | 0.7668 | 0.8757 |
| 0.3143 | 8.5053 | 808 | 0.7736 | 0.4250 | 0.7736 | 0.8796 |
| 0.3143 | 8.5263 | 810 | 0.7796 | 0.4250 | 0.7796 | 0.8829 |
| 0.3143 | 8.5474 | 812 | 0.7863 | 0.4250 | 0.7863 | 0.8867 |
| 0.3143 | 8.5684 | 814 | 0.7827 | 0.4250 | 0.7827 | 0.8847 |
| 0.3143 | 8.5895 | 816 | 0.7815 | 0.4250 | 0.7815 | 0.8841 |
| 0.3143 | 8.6105 | 818 | 0.7797 | 0.4250 | 0.7797 | 0.8830 |
| 0.3143 | 8.6316 | 820 | 0.7755 | 0.4250 | 0.7755 | 0.8806 |
| 0.3143 | 8.6526 | 822 | 0.7778 | 0.4250 | 0.7778 | 0.8819 |
| 0.3143 | 8.6737 | 824 | 0.7787 | 0.4250 | 0.7787 | 0.8825 |
| 0.3143 | 8.6947 | 826 | 0.7837 | 0.4118 | 0.7837 | 0.8852 |
| 0.3143 | 8.7158 | 828 | 0.7810 | 0.4118 | 0.7810 | 0.8837 |
| 0.3143 | 8.7368 | 830 | 0.7752 | 0.4118 | 0.7752 | 0.8804 |
| 0.3143 | 8.7579 | 832 | 0.7736 | 0.4118 | 0.7736 | 0.8795 |
| 0.3143 | 8.7789 | 834 | 0.7682 | 0.4118 | 0.7682 | 0.8764 |
| 0.3143 | 8.8 | 836 | 0.7678 | 0.4118 | 0.7678 | 0.8762 |
| 0.3143 | 8.8211 | 838 | 0.7637 | 0.4250 | 0.7637 | 0.8739 |
| 0.3143 | 8.8421 | 840 | 0.7690 | 0.4118 | 0.7690 | 0.8769 |
| 0.3143 | 8.8632 | 842 | 0.7731 | 0.4118 | 0.7731 | 0.8792 |
| 0.3143 | 8.8842 | 844 | 0.7733 | 0.4118 | 0.7733 | 0.8794 |
| 0.3143 | 8.9053 | 846 | 0.7746 | 0.4118 | 0.7746 | 0.8801 |
| 0.3143 | 8.9263 | 848 | 0.7698 | 0.3986 | 0.7698 | 0.8774 |
| 0.3143 | 8.9474 | 850 | 0.7625 | 0.3986 | 0.7625 | 0.8732 |
| 0.3143 | 8.9684 | 852 | 0.7569 | 0.3986 | 0.7569 | 0.8700 |
| 0.3143 | 8.9895 | 854 | 0.7460 | 0.3986 | 0.7460 | 0.8637 |
| 0.3143 | 9.0105 | 856 | 0.7409 | 0.3986 | 0.7409 | 0.8608 |
| 0.3143 | 9.0316 | 858 | 0.7429 | 0.3986 | 0.7429 | 0.8619 |
| 0.3143 | 9.0526 | 860 | 0.7480 | 0.3986 | 0.7480 | 0.8649 |
| 0.3143 | 9.0737 | 862 | 0.7573 | 0.3986 | 0.7573 | 0.8702 |
| 0.3143 | 9.0947 | 864 | 0.7592 | 0.3986 | 0.7592 | 0.8713 |
| 0.3143 | 9.1158 | 866 | 0.7583 | 0.3986 | 0.7583 | 0.8708 |
| 0.3143 | 9.1368 | 868 | 0.7609 | 0.3986 | 0.7609 | 0.8723 |
| 0.3143 | 9.1579 | 870 | 0.7621 | 0.3986 | 0.7621 | 0.8730 |
| 0.3143 | 9.1789 | 872 | 0.7642 | 0.3986 | 0.7642 | 0.8742 |
| 0.3143 | 9.2 | 874 | 0.7671 | 0.3986 | 0.7671 | 0.8758 |
| 0.3143 | 9.2211 | 876 | 0.7744 | 0.3986 | 0.7744 | 0.8800 |
| 0.3143 | 9.2421 | 878 | 0.7857 | 0.3986 | 0.7857 | 0.8864 |
| 0.3143 | 9.2632 | 880 | 0.7909 | 0.3986 | 0.7909 | 0.8893 |
| 0.3143 | 9.2842 | 882 | 0.7893 | 0.3986 | 0.7893 | 0.8884 |
| 0.3143 | 9.3053 | 884 | 0.7879 | 0.3986 | 0.7879 | 0.8877 |
| 0.3143 | 9.3263 | 886 | 0.7868 | 0.3986 | 0.7868 | 0.8870 |
| 0.3143 | 9.3474 | 888 | 0.7856 | 0.3986 | 0.7856 | 0.8863 |
| 0.3143 | 9.3684 | 890 | 0.7889 | 0.3986 | 0.7889 | 0.8882 |
| 0.3143 | 9.3895 | 892 | 0.7904 | 0.3986 | 0.7904 | 0.8891 |
| 0.3143 | 9.4105 | 894 | 0.7931 | 0.3986 | 0.7931 | 0.8906 |
| 0.3143 | 9.4316 | 896 | 0.7900 | 0.3986 | 0.7900 | 0.8888 |
| 0.3143 | 9.4526 | 898 | 0.7867 | 0.3986 | 0.7867 | 0.8869 |
| 0.3143 | 9.4737 | 900 | 0.7808 | 0.3986 | 0.7808 | 0.8836 |
| 0.3143 | 9.4947 | 902 | 0.7750 | 0.3986 | 0.7750 | 0.8803 |
| 0.3143 | 9.5158 | 904 | 0.7699 | 0.3986 | 0.7699 | 0.8774 |
| 0.3143 | 9.5368 | 906 | 0.7642 | 0.3986 | 0.7642 | 0.8742 |
| 0.3143 | 9.5579 | 908 | 0.7625 | 0.3986 | 0.7625 | 0.8732 |
| 0.3143 | 9.5789 | 910 | 0.7616 | 0.3986 | 0.7616 | 0.8727 |
| 0.3143 | 9.6 | 912 | 0.7643 | 0.3986 | 0.7643 | 0.8742 |
| 0.3143 | 9.6211 | 914 | 0.7685 | 0.3986 | 0.7685 | 0.8766 |
| 0.3143 | 9.6421 | 916 | 0.7742 | 0.3986 | 0.7742 | 0.8799 |
| 0.3143 | 9.6632 | 918 | 0.7777 | 0.3986 | 0.7777 | 0.8819 |
| 0.3143 | 9.6842 | 920 | 0.7796 | 0.3986 | 0.7796 | 0.8829 |
| 0.3143 | 9.7053 | 922 | 0.7838 | 0.3986 | 0.7838 | 0.8853 |
| 0.3143 | 9.7263 | 924 | 0.7872 | 0.3986 | 0.7872 | 0.8872 |
| 0.3143 | 9.7474 | 926 | 0.7891 | 0.3986 | 0.7891 | 0.8883 |
| 0.3143 | 9.7684 | 928 | 0.7896 | 0.3986 | 0.7896 | 0.8886 |
| 0.3143 | 9.7895 | 930 | 0.7902 | 0.3986 | 0.7902 | 0.8889 |
| 0.3143 | 9.8105 | 932 | 0.7919 | 0.3986 | 0.7919 | 0.8899 |
| 0.3143 | 9.8316 | 934 | 0.7915 | 0.3986 | 0.7915 | 0.8897 |
| 0.3143 | 9.8526 | 936 | 0.7908 | 0.3986 | 0.7908 | 0.8893 |
| 0.3143 | 9.8737 | 938 | 0.7902 | 0.3986 | 0.7902 | 0.8889 |
| 0.3143 | 9.8947 | 940 | 0.7903 | 0.3986 | 0.7903 | 0.8890 |
| 0.3143 | 9.9158 | 942 | 0.7905 | 0.3986 | 0.7905 | 0.8891 |
| 0.3143 | 9.9368 | 944 | 0.7902 | 0.3986 | 0.7902 | 0.8889 |
| 0.3143 | 9.9579 | 946 | 0.7899 | 0.3986 | 0.7899 | 0.8888 |
| 0.3143 | 9.9789 | 948 | 0.7899 | 0.3986 | 0.7899 | 0.8888 |
| 0.3143 | 10.0 | 950 | 0.7899 | 0.3986 | 0.7899 | 0.8888 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
None58/my_awesome_opus_books_model2 | None58 | 2024-11-26T15:48:22Z | 36 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-26T10:08:06Z | ---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model2
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7207
- Bleu: 10.4204
- Gen Len: 14.9796
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 2.0793 | 1.0 | 50000 | 1.8296 | 9.5486 | 14.9976 |
| 1.9761 | 2.0 | 100000 | 1.7207 | 10.4204 | 14.9796 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.4.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.3
|
furrutiav/roberta_mixtral_nllfg_vanilla_rte_none_naive | furrutiav | 2024-11-26T15:45:37Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-11-26T15:45:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xiaozaa/catvton-flux-alpha | xiaozaa | 2024-11-26T15:42:26Z | 731 | 37 | diffusers | [
"diffusers",
"safetensors",
"tryon",
"vto",
"image-to-image",
"arxiv:2407.15886",
"arxiv:2410.23775",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:finetune:black-forest-labs/FLUX.1-Fill-dev",
"license:cc-by-nc-2.0",
"region:us"
] | image-to-image | 2024-11-23T17:26:39Z | ---
library_name: diffusers
license: cc-by-nc-2.0
base_model:
- black-forest-labs/FLUX.1-Fill-dev
pipeline_tag: image-to-image
tags:
- tryon
- vto
---
# Model Card for CATVTON-Flux
CATVTON-Flux is an advanced virtual try-on solution that combines CATVTON (Contrastive Appearance and Topology Virtual Try-On) with the FLUX fill inpainting model for realistic and accurate clothing transfer.
## Update:
Latest Achievement (2024/11/24):
CatVton-Flux-Alpha achieved SOTA performance with FID 5.593255043029785 on the VITON-HD dataset (test configuration: scale 30, step 30). My VITON-HD test inference results are available [here](https://drive.google.com/file/d/1T2W5R1xH_uszGVD8p6UUAtWyx43rxGmI/view?usp=sharing).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [X/Twitter:Black Magic An](https://x.com/MrsZaaa)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [github](https://github.com/nftblackmagic/catvton-flux)
## Uses
The model is designed for virtual try-on applications, allowing users to visualize how different garments would look on a person. It can be used directly through a command-line interface with the following parameters:
- Input person image
- Person mask
- Garment image
- Random seed (optional)
## How to Get Started with the Model
```python
import torch
from diffusers import FluxFillPipeline, FluxTransformer2DModel

# Load the try-on transformer weights from this repository
transformer = FluxTransformer2DModel.from_pretrained(
    "xiaozaa/catvton-flux-alpha",
    torch_dtype=torch.bfloat16
)
# Plug it into the FLUX fill (inpainting) pipeline
pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16
).to("cuda")
```
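The call below is a hedged sketch of one way to run the loaded pipeline for a try-on, continuing from the snippet above (which defines `pipe`); it is not the repository's official inference script. The file names, the side-by-side garment/person layout, the prompt, and the resolution are assumptions — see the [repository](https://github.com/nftblackmagic/catvton-flux) for the exact preprocessing. Only the guidance scale and step count follow the test configuration reported above.
```python
import torch
from PIL import Image

# File names are placeholders; the concatenation/masking layout is an assumption
# modeled on typical in-context try-on setups, not the repo's exact script.
person = Image.open("person.jpg").convert("RGB").resize((576, 768))
garment = Image.open("garment.jpg").convert("RGB").resize((576, 768))
person_mask = Image.open("person_mask.png").convert("L").resize((576, 768))

# Garment on the left, person on the right, in one canvas
concat = Image.new("RGB", (576 * 2, 768))
concat.paste(garment, (0, 0))
concat.paste(person, (576, 0))

# Keep the garment half untouched; inpaint only the masked person half
mask = Image.new("L", (576 * 2, 768), 0)
mask.paste(person_mask, (576, 0))

result = pipe(
    prompt="A person wearing the garment shown on the left.",  # placeholder prompt
    image=concat,
    mask_image=mask,
    height=768,
    width=576 * 2,
    guidance_scale=30,       # matches the reported test configuration (scale 30)
    num_inference_steps=30,  # matches the reported test configuration (step 30)
    generator=torch.Generator("cuda").manual_seed(42),  # optional random seed
).images[0]

# The right half of the output canvas is the try-on result
result.crop((576, 0, 576 * 2, 768)).save("tryon_result.png")
```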
## Training Details
### Training Data
VITON-HD dataset
### Training Procedure
Fine-tuning of FLUX.1-Fill-dev
## Evaluation
#### Metrics
FID: 5.593255043029785 (SOTA)
### Results
[More Information Needed]
#### Summary
**BibTeX:**
```
@misc{chong2024catvtonconcatenationneedvirtual,
title={CatVTON: Concatenation Is All You Need for Virtual Try-On with Diffusion Models},
author={Zheng Chong and Xiao Dong and Haoxiang Li and Shiyue Zhang and Wenqing Zhang and Xujie Zhang and Hanqing Zhao and Xiaodan Liang},
year={2024},
eprint={2407.15886},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.15886},
}
@article{lhhuang2024iclora,
title={In-Context LoRA for Diffusion Transformers},
author={Huang, Lianghua and Wang, Wei and Wu, Zhi-Fan and Shi, Yupeng and Dou, Huanzhang and Liang, Chen and Feng, Yutong and Liu, Yu and Zhou, Jingren},
journal={arXiv preprint arxiv:2410.23775},
year={2024}
}
```
|
LakoMoor/LDAI-1.5-ANIU | LakoMoor | 2024-11-26T15:40:47Z | 98 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"ru",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-11T09:40:03Z | ---
library_name: transformers
model_name: LDAI-1.5-ANIU
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
language:
- ru
license: apache-2.0
---
# LDAI-1.5-ANIU

The model is designed to recommend anime based on user preferences. LDAI-1.5-ANIU is used on the website https://aniu.lakomoor.com
## Training:
To train the model, we collected a dataset based on data from Aniu.
The synthetic dataset was generated with the [Vikhr-Nemo-12B-Instruct-R-21-09-24](https://huggingface.co/Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24) and [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) models, with [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) as the base model.
The model was trained with SFT (supervised fine-tuning).
Building the synthetic dataset and training the model took about 120 hours on two Nvidia Tesla P40 24GB GPUs.
## Example code:
**Recommended generation parameters:**
- temperature 0.3
- top_k 0
- top_p 1.0
***
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "LakoMoor/LDAI-1.5-ANIU"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prepare the input text
input_text = "Семейное аниме для детей 8-12 лет"
messages = [
    {"role": "system", "content": 'Вы модель "ldai-1.5-aniu", эксперт по подбору аниме на сайте Aniu. Пользователи описывают свои предпочтения, и ты рекомендуешь аниме, соответствующее их запросу. Учитывай жанры, возраст, предпочитаемый стиль и студию, а также добавляй краткое описание сюжета и полезные советы.'},
    {"role": "user", "content": input_text},
]

# Tokenize and generate text
input_ids = tokenizer.apply_chat_template(messages, truncation=True, add_generation_prompt=True, return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=512,
    do_sample=True,  # sampling must be enabled for temperature/top_k/top_p to take effect
    temperature=0.3,
    num_return_sequences=1,
    no_repeat_ngram_size=2,
    top_k=0,
    top_p=1.0,
)

# Decode and print the result
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
***
#### Model response:
>{
> "title_ru": "Покемон: Современное поколение - Лукарио и загадка Мью",
> "title_alt": [
> "Pokemon: Lucario and the Mystery of Mew",
> "Gekijouban Pocket Monsters Advanced Generation: Myuu to Hadou no Yuusha Lucario",
> "Pokemon Movie 8"
> ],
> "year": 2010,
> "genres": [
> "детское",
> "экшен",
> "приключения",
> "драма",
> "фэнтези"
> ],
> "studio": "OLM",
> "author": "Кэти Пилон",
> "message": "Рекомендуем посмотреть аниме 'Покемон: Современное поколение - Лукарио и загадка Мью'. Это увлекательная история о приключениях в мире покемонов. Найдите его на сайте Aniu."
>}
### Links
- [LakoMoor](https://t.me/lakomoordev)
- [Aniu](https://aniu.su/) |
michaelsyao/DiffusionLM-ROCStories | michaelsyao | 2024-11-26T15:39:38Z | 365 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2024-11-26T15:39:06Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
Ritual-Net/iris-classification | Ritual-Net | 2024-11-26T15:31:12Z | 0 | 0 | null | [
"onnx",
"license:bsd-3-clause",
"region:us"
] | null | 2024-02-06T14:29:53Z | ---
license: bsd-3-clause
---
# Iris Classification
This repository contains the generated `onnx` and `pytorch` model files for the
[Iris Classification](https://github.com/ritual-net/simple-ml-models/blob/main/iris_classification/README.md) project,
which is a simple project used in our tutorials & docs.
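A minimal sketch of loading the exported ONNX file with `onnxruntime` might look like the following; the file name and input layout (the four standard iris features) are assumptions — the linked project README documents the actual export.
```python
import numpy as np
import onnxruntime as ort

# File and input names are assumptions; confirm with `session.get_inputs()`.
session = ort.InferenceSession("iris_classification.onnx")
input_name = session.get_inputs()[0].name

# One sample: sepal length, sepal width, petal length, petal width (cm)
sample = np.array([[5.1, 3.5, 1.4, 0.2]], dtype=np.float32)

outputs = session.run(None, {input_name: sample})
print("Predicted class scores:", outputs[0])
```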
Refer to the [`simple-ml-models`](https://github.com/ritual-net/simple-ml-models) repository for more information. |
glif-loradex-trainer/kklors_flux_dev_translucencyV2 | glif-loradex-trainer | 2024-11-26T15:30:24Z | 59 | 1 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-11-26T15:29:52Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1732627287668__000003000_0.jpg
text: young woman wearing a plastic coat and a plastic hand bag, sunset in a desert
TL, translucency
- output:
url: samples/1732627329449__000003000_1.jpg
text: strong light shining through a person TL TL, translucency
- output:
url: samples/1732627371211__000003000_2.jpg
text: an apple surrounded by plastic TL, translucency
- output:
url: samples/1732627413085__000003000_3.jpg
text: beer bottle in a bar lit by candles TL, translucency
- output:
url: samples/1732627454949__000003000_4.jpg
text: close up of leafs in an autumn forest TL, translucency
- output:
url: samples/1732627496868__000003000_5.jpg
text: a person behind a curtain TL, translucency
base_model: black-forest-labs/FLUX.1-dev
trigger: TL, translucency
instance_prompt: TL, translucency
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# flux_dev_translucencyV2
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `kklors`.
<Gallery />
## Trigger words
You should use `TL, translucency` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/kklors_flux_dev_translucencyV2/tree/main) them in the Files & versions tab.
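A minimal `diffusers` sketch for using these weights with FLUX.1-dev might look like the following; the LoRA file name is an assumption, so check the Files & versions tab and adjust it (or pass a downloaded local path) if needed.
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# The weight file name below is an assumption -- adjust it to match the
# safetensors file listed in this repository.
pipe.load_lora_weights(
    "glif-loradex-trainer/kklors_flux_dev_translucencyV2",
    weight_name="flux_dev_translucencyV2.safetensors",
)

# Prompt taken from the sample prompts above; it includes the trigger words.
image = pipe(
    "strong light shining through a person TL, translucency",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("translucency.png")
```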
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
ekinnk/checkpoint-AT-atamalar-10k | ekinnk | 2024-11-26T15:24:06Z | 6 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:NousResearch/Meta-Llama-3.1-8B-Instruct",
"region:us"
] | null | 2024-11-26T15:18:32Z | ---
base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Haesteining/PhiLargev2 | Haesteining | 2024-11-26T15:17:42Z | 36 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T15:14:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k20_task2_organization_fold1 | MayBashendy | 2024-11-26T15:06:45Z | 164 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T14:54:57Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits_FineTuningAraBERT_AugV5_k20_task2_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits_FineTuningAraBERT_AugV5_k20_task2_organization_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6817
- Qwk: 0.4518
- Mse: 0.6817
- Rmse: 0.8256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
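For reference, these settings map roughly onto the following `transformers` `TrainingArguments`; this is only a sketch — the actual training script is not published, and the output directory name is a placeholder.
```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; anything beyond them is an assumption.
args = TrainingArguments(
    output_dir="arabert_task2_organization_fold1",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```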
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0278 | 2 | 5.0318 | 0.0247 | 5.0318 | 2.2432 |
| No log | 0.0556 | 4 | 2.7159 | 0.0919 | 2.7159 | 1.6480 |
| No log | 0.0833 | 6 | 1.1819 | 0.1598 | 1.1819 | 1.0872 |
| No log | 0.1111 | 8 | 1.0216 | 0.1781 | 1.0216 | 1.0107 |
| No log | 0.1389 | 10 | 1.1677 | 0.2268 | 1.1677 | 1.0806 |
| No log | 0.1667 | 12 | 1.4310 | 0.2243 | 1.4310 | 1.1962 |
| No log | 0.1944 | 14 | 1.3512 | 0.2439 | 1.3512 | 1.1624 |
| No log | 0.2222 | 16 | 1.4651 | 0.0622 | 1.4651 | 1.2104 |
| No log | 0.25 | 18 | 1.4550 | -0.0439 | 1.4550 | 1.2062 |
| No log | 0.2778 | 20 | 1.2284 | 0.0655 | 1.2284 | 1.1084 |
| No log | 0.3056 | 22 | 0.9609 | 0.1054 | 0.9609 | 0.9803 |
| No log | 0.3333 | 24 | 0.9277 | 0.1273 | 0.9277 | 0.9632 |
| No log | 0.3611 | 26 | 0.9985 | 0.2259 | 0.9985 | 0.9993 |
| No log | 0.3889 | 28 | 1.0651 | 0.2564 | 1.0651 | 1.0320 |
| No log | 0.4167 | 30 | 0.9368 | 0.1480 | 0.9368 | 0.9679 |
| No log | 0.4444 | 32 | 0.9452 | 0.1480 | 0.9452 | 0.9722 |
| No log | 0.4722 | 34 | 0.9855 | 0.0813 | 0.9855 | 0.9927 |
| No log | 0.5 | 36 | 1.0278 | 0.0602 | 1.0278 | 1.0138 |
| No log | 0.5278 | 38 | 1.0600 | 0.1273 | 1.0600 | 1.0296 |
| No log | 0.5556 | 40 | 1.1979 | 0.2177 | 1.1979 | 1.0945 |
| No log | 0.5833 | 42 | 1.1255 | 0.2616 | 1.1255 | 1.0609 |
| No log | 0.6111 | 44 | 0.9829 | 0.1273 | 0.9829 | 0.9914 |
| No log | 0.6389 | 46 | 0.8107 | 0.0813 | 0.8107 | 0.9004 |
| No log | 0.6667 | 48 | 0.6785 | 0.0680 | 0.6785 | 0.8237 |
| No log | 0.6944 | 50 | 0.6142 | 0.1071 | 0.6142 | 0.7837 |
| No log | 0.7222 | 52 | 0.6322 | 0.1071 | 0.6322 | 0.7951 |
| No log | 0.75 | 54 | 0.6609 | 0.0876 | 0.6609 | 0.8129 |
| No log | 0.7778 | 56 | 0.7748 | 0.0680 | 0.7748 | 0.8802 |
| No log | 0.8056 | 58 | 0.9007 | 0.0680 | 0.9007 | 0.9491 |
| No log | 0.8333 | 60 | 0.9165 | 0.0680 | 0.9165 | 0.9573 |
| No log | 0.8611 | 62 | 0.9234 | -0.0205 | 0.9234 | 0.9610 |
| No log | 0.8889 | 64 | 0.9211 | -0.0205 | 0.9211 | 0.9598 |
| No log | 0.9167 | 66 | 0.8085 | 0.0 | 0.8085 | 0.8991 |
| No log | 0.9444 | 68 | 0.7007 | 0.0 | 0.7007 | 0.8371 |
| No log | 0.9722 | 70 | 0.6571 | 0.0079 | 0.6571 | 0.8106 |
| No log | 1.0 | 72 | 0.6591 | 0.0079 | 0.6591 | 0.8118 |
| No log | 1.0278 | 74 | 0.7075 | 0.0212 | 0.7075 | 0.8411 |
| No log | 1.0556 | 76 | 0.6918 | 0.0250 | 0.6918 | 0.8317 |
| No log | 1.0833 | 78 | 0.8344 | 0.0212 | 0.8344 | 0.9135 |
| No log | 1.1111 | 80 | 0.9747 | 0.0212 | 0.9747 | 0.9873 |
| No log | 1.1389 | 82 | 1.0593 | 0.0370 | 1.0593 | 1.0292 |
| No log | 1.1667 | 84 | 1.0998 | 0.0370 | 1.0998 | 1.0487 |
| No log | 1.1944 | 86 | 0.9826 | 0.0212 | 0.9826 | 0.9913 |
| No log | 1.2222 | 88 | 0.8927 | 0.0212 | 0.8927 | 0.9449 |
| No log | 1.25 | 90 | 0.7860 | 0.0212 | 0.7860 | 0.8866 |
| No log | 1.2778 | 92 | 0.7100 | 0.0645 | 0.7100 | 0.8426 |
| No log | 1.3056 | 94 | 0.7735 | 0.0690 | 0.7735 | 0.8795 |
| No log | 1.3333 | 96 | 0.8339 | 0.0110 | 0.8339 | 0.9132 |
| No log | 1.3611 | 98 | 0.8644 | 0.0190 | 0.8644 | 0.9297 |
| No log | 1.3889 | 100 | 0.7558 | 0.0622 | 0.7558 | 0.8694 |
| No log | 1.4167 | 102 | 0.7562 | 0.0278 | 0.7562 | 0.8696 |
| No log | 1.4444 | 104 | 0.9595 | 0.0451 | 0.9595 | 0.9796 |
| No log | 1.4722 | 106 | 1.2022 | 0.1763 | 1.2022 | 1.0965 |
| No log | 1.5 | 108 | 1.1498 | 0.1763 | 1.1498 | 1.0723 |
| No log | 1.5278 | 110 | 0.7530 | 0.0670 | 0.7530 | 0.8678 |
| No log | 1.5556 | 112 | 0.5472 | 0.4104 | 0.5472 | 0.7398 |
| No log | 1.5833 | 114 | 0.5617 | 0.4104 | 0.5617 | 0.7494 |
| No log | 1.6111 | 116 | 0.6226 | 0.3340 | 0.6226 | 0.7890 |
| No log | 1.6389 | 118 | 0.6149 | 0.3389 | 0.6149 | 0.7841 |
| No log | 1.6667 | 120 | 0.5928 | 0.3367 | 0.5928 | 0.7699 |
| No log | 1.6944 | 122 | 0.7566 | 0.1404 | 0.7566 | 0.8698 |
| No log | 1.7222 | 124 | 0.6849 | 0.2162 | 0.6849 | 0.8276 |
| No log | 1.75 | 126 | 0.7239 | 0.3645 | 0.7239 | 0.8508 |
| No log | 1.7778 | 128 | 0.9947 | 0.3209 | 0.9947 | 0.9974 |
| No log | 1.8056 | 130 | 0.8946 | 0.4104 | 0.8946 | 0.9459 |
| No log | 1.8333 | 132 | 0.7677 | 0.4651 | 0.7677 | 0.8762 |
| No log | 1.8611 | 134 | 0.6065 | 0.3908 | 0.6065 | 0.7788 |
| No log | 1.8889 | 136 | 0.5269 | 0.4355 | 0.5269 | 0.7259 |
| No log | 1.9167 | 138 | 0.5471 | 0.3830 | 0.5471 | 0.7397 |
| No log | 1.9444 | 140 | 0.5951 | 0.3273 | 0.5951 | 0.7714 |
| No log | 1.9722 | 142 | 0.6288 | 0.1629 | 0.6288 | 0.7930 |
| No log | 2.0 | 144 | 0.6829 | 0.1876 | 0.6829 | 0.8264 |
| No log | 2.0278 | 146 | 0.7299 | -0.0172 | 0.7299 | 0.8543 |
| No log | 2.0556 | 148 | 0.7222 | 0.0250 | 0.7222 | 0.8498 |
| No log | 2.0833 | 150 | 0.6708 | 0.0289 | 0.6708 | 0.8190 |
| No log | 2.1111 | 152 | 0.7018 | 0.1841 | 0.7018 | 0.8377 |
| No log | 2.1389 | 154 | 0.7427 | 0.2725 | 0.7427 | 0.8618 |
| No log | 2.1667 | 156 | 0.6506 | 0.3754 | 0.6506 | 0.8066 |
| No log | 2.1944 | 158 | 0.5553 | 0.4189 | 0.5553 | 0.7452 |
| No log | 2.2222 | 160 | 0.6133 | 0.3206 | 0.6133 | 0.7832 |
| No log | 2.25 | 162 | 0.6656 | 0.2028 | 0.6656 | 0.8159 |
| No log | 2.2778 | 164 | 0.5890 | 0.4436 | 0.5890 | 0.7675 |
| No log | 2.3056 | 166 | 0.5924 | 0.4838 | 0.5924 | 0.7697 |
| No log | 2.3333 | 168 | 0.7785 | 0.3022 | 0.7785 | 0.8823 |
| No log | 2.3611 | 170 | 0.7307 | 0.3317 | 0.7307 | 0.8548 |
| No log | 2.3889 | 172 | 0.6156 | 0.4666 | 0.6156 | 0.7846 |
| No log | 2.4167 | 174 | 0.5585 | 0.4355 | 0.5585 | 0.7473 |
| No log | 2.4444 | 176 | 0.6518 | 0.2885 | 0.6518 | 0.8074 |
| No log | 2.4722 | 178 | 0.7009 | 0.1901 | 0.7009 | 0.8372 |
| No log | 2.5 | 180 | 0.6600 | 0.2162 | 0.6600 | 0.8124 |
| No log | 2.5278 | 182 | 0.6470 | 0.3872 | 0.6470 | 0.8044 |
| No log | 2.5556 | 184 | 0.6625 | 0.2295 | 0.6625 | 0.8139 |
| No log | 2.5833 | 186 | 0.6664 | 0.2190 | 0.6664 | 0.8163 |
| No log | 2.6111 | 188 | 0.6245 | 0.3345 | 0.6245 | 0.7902 |
| No log | 2.6389 | 190 | 0.6044 | 0.3866 | 0.6044 | 0.7774 |
| No log | 2.6667 | 192 | 0.5806 | 0.4024 | 0.5806 | 0.7620 |
| No log | 2.6944 | 194 | 0.5636 | 0.4186 | 0.5636 | 0.7507 |
| No log | 2.7222 | 196 | 0.5398 | 0.3662 | 0.5398 | 0.7347 |
| No log | 2.75 | 198 | 0.5370 | 0.3851 | 0.5370 | 0.7328 |
| No log | 2.7778 | 200 | 0.5118 | 0.3297 | 0.5118 | 0.7154 |
| No log | 2.8056 | 202 | 0.5337 | 0.4447 | 0.5337 | 0.7305 |
| No log | 2.8333 | 204 | 0.6048 | 0.3280 | 0.6048 | 0.7777 |
| No log | 2.8611 | 206 | 0.5729 | 0.3611 | 0.5729 | 0.7569 |
| No log | 2.8889 | 208 | 0.5006 | 0.4376 | 0.5006 | 0.7075 |
| No log | 2.9167 | 210 | 0.5005 | 0.4387 | 0.5005 | 0.7075 |
| No log | 2.9444 | 212 | 0.5367 | 0.4673 | 0.5367 | 0.7326 |
| No log | 2.9722 | 214 | 0.6417 | 0.4502 | 0.6417 | 0.8010 |
| No log | 3.0 | 216 | 0.6578 | 0.4513 | 0.6578 | 0.8110 |
| No log | 3.0278 | 218 | 0.7698 | 0.3153 | 0.7698 | 0.8774 |
| No log | 3.0556 | 220 | 0.7486 | 0.3545 | 0.7486 | 0.8652 |
| No log | 3.0833 | 222 | 0.6301 | 0.4543 | 0.6301 | 0.7938 |
| No log | 3.1111 | 224 | 0.5375 | 0.4435 | 0.5375 | 0.7331 |
| No log | 3.1389 | 226 | 0.5406 | 0.5079 | 0.5406 | 0.7353 |
| No log | 3.1667 | 228 | 0.5464 | 0.4762 | 0.5464 | 0.7392 |
| No log | 3.1944 | 230 | 0.5686 | 0.4781 | 0.5686 | 0.7541 |
| No log | 3.2222 | 232 | 0.5800 | 0.4446 | 0.5800 | 0.7615 |
| No log | 3.25 | 234 | 0.5602 | 0.4782 | 0.5602 | 0.7485 |
| No log | 3.2778 | 236 | 0.5945 | 0.3794 | 0.5945 | 0.7710 |
| No log | 3.3056 | 238 | 0.5720 | 0.4130 | 0.5720 | 0.7563 |
| No log | 3.3333 | 240 | 0.5344 | 0.5230 | 0.5344 | 0.7310 |
| No log | 3.3611 | 242 | 0.5417 | 0.4912 | 0.5417 | 0.7360 |
| No log | 3.3889 | 244 | 0.5472 | 0.4762 | 0.5472 | 0.7397 |
| No log | 3.4167 | 246 | 0.5454 | 0.4929 | 0.5454 | 0.7385 |
| No log | 3.4444 | 248 | 0.5372 | 0.4593 | 0.5372 | 0.7330 |
| No log | 3.4722 | 250 | 0.5398 | 0.4593 | 0.5398 | 0.7347 |
| No log | 3.5 | 252 | 0.5423 | 0.4451 | 0.5423 | 0.7364 |
| No log | 3.5278 | 254 | 0.5529 | 0.5059 | 0.5529 | 0.7436 |
| No log | 3.5556 | 256 | 0.5680 | 0.4781 | 0.5680 | 0.7536 |
| No log | 3.5833 | 258 | 0.5882 | 0.3972 | 0.5882 | 0.7670 |
| No log | 3.6111 | 260 | 0.5818 | 0.3972 | 0.5818 | 0.7628 |
| No log | 3.6389 | 262 | 0.6123 | 0.3016 | 0.6123 | 0.7825 |
| No log | 3.6667 | 264 | 0.5754 | 0.3326 | 0.5754 | 0.7585 |
| No log | 3.6944 | 266 | 0.5337 | 0.4745 | 0.5337 | 0.7305 |
| No log | 3.7222 | 268 | 0.5321 | 0.4918 | 0.5321 | 0.7294 |
| No log | 3.75 | 270 | 0.5442 | 0.3794 | 0.5442 | 0.7377 |
| No log | 3.7778 | 272 | 0.5782 | 0.3280 | 0.5782 | 0.7604 |
| No log | 3.8056 | 274 | 0.5933 | 0.3066 | 0.5933 | 0.7703 |
| No log | 3.8333 | 276 | 0.5864 | 0.3066 | 0.5864 | 0.7657 |
| No log | 3.8611 | 278 | 0.5708 | 0.2848 | 0.5708 | 0.7555 |
| No log | 3.8889 | 280 | 0.5299 | 0.4047 | 0.5299 | 0.7280 |
| No log | 3.9167 | 282 | 0.5504 | 0.5336 | 0.5504 | 0.7419 |
| No log | 3.9444 | 284 | 0.5581 | 0.4841 | 0.5581 | 0.7471 |
| No log | 3.9722 | 286 | 0.5450 | 0.4398 | 0.5450 | 0.7383 |
| No log | 4.0 | 288 | 0.5583 | 0.4918 | 0.5583 | 0.7472 |
| No log | 4.0278 | 290 | 0.5711 | 0.4763 | 0.5711 | 0.7557 |
| No log | 4.0556 | 292 | 0.5729 | 0.4763 | 0.5729 | 0.7569 |
| No log | 4.0833 | 294 | 0.6016 | 0.3794 | 0.6016 | 0.7756 |
| No log | 4.1111 | 296 | 0.6344 | 0.3481 | 0.6344 | 0.7965 |
| No log | 4.1389 | 298 | 0.6972 | 0.2403 | 0.6972 | 0.8350 |
| No log | 4.1667 | 300 | 0.6786 | 0.3304 | 0.6786 | 0.8238 |
| No log | 4.1944 | 302 | 0.5920 | 0.3972 | 0.5920 | 0.7694 |
| No log | 4.2222 | 304 | 0.5307 | 0.4262 | 0.5307 | 0.7285 |
| No log | 4.25 | 306 | 0.5288 | 0.4693 | 0.5288 | 0.7272 |
| No log | 4.2778 | 308 | 0.5256 | 0.4693 | 0.5256 | 0.7250 |
| No log | 4.3056 | 310 | 0.5215 | 0.4558 | 0.5215 | 0.7221 |
| No log | 4.3333 | 312 | 0.5279 | 0.4874 | 0.5279 | 0.7266 |
| No log | 4.3611 | 314 | 0.5361 | 0.4874 | 0.5361 | 0.7322 |
| No log | 4.3889 | 316 | 0.5452 | 0.3938 | 0.5452 | 0.7383 |
| No log | 4.4167 | 318 | 0.5743 | 0.4282 | 0.5743 | 0.7578 |
| No log | 4.4444 | 320 | 0.5751 | 0.4282 | 0.5751 | 0.7583 |
| No log | 4.4722 | 322 | 0.5585 | 0.3729 | 0.5585 | 0.7474 |
| No log | 4.5 | 324 | 0.5543 | 0.3660 | 0.5543 | 0.7445 |
| No log | 4.5278 | 326 | 0.5580 | 0.4214 | 0.5580 | 0.7470 |
| No log | 4.5556 | 328 | 0.5725 | 0.4465 | 0.5725 | 0.7566 |
| No log | 4.5833 | 330 | 0.6188 | 0.3638 | 0.6188 | 0.7867 |
| No log | 4.6111 | 332 | 0.6350 | 0.3638 | 0.6350 | 0.7969 |
| No log | 4.6389 | 334 | 0.5818 | 0.4427 | 0.5818 | 0.7628 |
| No log | 4.6667 | 336 | 0.5347 | 0.3158 | 0.5347 | 0.7312 |
| No log | 4.6944 | 338 | 0.5338 | 0.3343 | 0.5338 | 0.7306 |
| No log | 4.7222 | 340 | 0.5212 | 0.3321 | 0.5212 | 0.7220 |
| No log | 4.75 | 342 | 0.5713 | 0.3540 | 0.5713 | 0.7559 |
| No log | 4.7778 | 344 | 0.6365 | 0.3548 | 0.6365 | 0.7978 |
| No log | 4.8056 | 346 | 0.6207 | 0.3548 | 0.6207 | 0.7879 |
| No log | 4.8333 | 348 | 0.5883 | 0.3256 | 0.5883 | 0.7670 |
| No log | 4.8611 | 350 | 0.5647 | 0.4301 | 0.5647 | 0.7515 |
| No log | 4.8889 | 352 | 0.6001 | 0.4184 | 0.6001 | 0.7746 |
| No log | 4.9167 | 354 | 0.6240 | 0.3817 | 0.6240 | 0.7899 |
| No log | 4.9444 | 356 | 0.6704 | 0.3494 | 0.6704 | 0.8188 |
| No log | 4.9722 | 358 | 0.7728 | 0.2278 | 0.7728 | 0.8791 |
| No log | 5.0 | 360 | 0.8865 | 0.0938 | 0.8865 | 0.9416 |
| No log | 5.0278 | 362 | 0.8750 | 0.0466 | 0.8750 | 0.9354 |
| No log | 5.0556 | 364 | 0.7943 | 0.1128 | 0.7943 | 0.8912 |
| No log | 5.0833 | 366 | 0.6795 | 0.2220 | 0.6795 | 0.8243 |
| No log | 5.1111 | 368 | 0.5886 | 0.2822 | 0.5886 | 0.7672 |
| No log | 5.1389 | 370 | 0.5599 | 0.3155 | 0.5599 | 0.7483 |
| No log | 5.1667 | 372 | 0.5563 | 0.3516 | 0.5563 | 0.7458 |
| No log | 5.1944 | 374 | 0.5695 | 0.3417 | 0.5695 | 0.7546 |
| No log | 5.2222 | 376 | 0.5917 | 0.3786 | 0.5917 | 0.7692 |
| No log | 5.25 | 378 | 0.6141 | 0.3349 | 0.6141 | 0.7836 |
| No log | 5.2778 | 380 | 0.6361 | 0.3873 | 0.6361 | 0.7975 |
| No log | 5.3056 | 382 | 0.6289 | 0.3873 | 0.6289 | 0.7930 |
| No log | 5.3333 | 384 | 0.5884 | 0.4130 | 0.5884 | 0.7671 |
| No log | 5.3611 | 386 | 0.5617 | 0.4067 | 0.5617 | 0.7494 |
| No log | 5.3889 | 388 | 0.5541 | 0.3926 | 0.5541 | 0.7443 |
| No log | 5.4167 | 390 | 0.5499 | 0.3348 | 0.5499 | 0.7416 |
| No log | 5.4444 | 392 | 0.5566 | 0.3948 | 0.5566 | 0.7460 |
| No log | 5.4722 | 394 | 0.5445 | 0.3684 | 0.5445 | 0.7379 |
| No log | 5.5 | 396 | 0.5258 | 0.3467 | 0.5258 | 0.7251 |
| No log | 5.5278 | 398 | 0.5244 | 0.3485 | 0.5244 | 0.7242 |
| No log | 5.5556 | 400 | 0.5327 | 0.3509 | 0.5327 | 0.7299 |
| No log | 5.5833 | 402 | 0.5611 | 0.3707 | 0.5611 | 0.7491 |
| No log | 5.6111 | 404 | 0.5914 | 0.3786 | 0.5914 | 0.7690 |
| No log | 5.6389 | 406 | 0.5988 | 0.3349 | 0.5988 | 0.7738 |
| No log | 5.6667 | 408 | 0.6093 | 0.3349 | 0.6093 | 0.7806 |
| No log | 5.6944 | 410 | 0.6032 | 0.3837 | 0.6032 | 0.7766 |
| No log | 5.7222 | 412 | 0.6032 | 0.4456 | 0.6032 | 0.7767 |
| No log | 5.75 | 414 | 0.5910 | 0.3345 | 0.5910 | 0.7688 |
| No log | 5.7778 | 416 | 0.5832 | 0.3104 | 0.5832 | 0.7637 |
| No log | 5.8056 | 418 | 0.5856 | 0.4236 | 0.5856 | 0.7652 |
| No log | 5.8333 | 420 | 0.5941 | 0.3440 | 0.5941 | 0.7708 |
| No log | 5.8611 | 422 | 0.6206 | 0.3280 | 0.6206 | 0.7878 |
| No log | 5.8889 | 424 | 0.6821 | 0.3327 | 0.6821 | 0.8259 |
| No log | 5.9167 | 426 | 0.7351 | 0.2199 | 0.7351 | 0.8574 |
| No log | 5.9444 | 428 | 0.7249 | 0.2199 | 0.7249 | 0.8514 |
| No log | 5.9722 | 430 | 0.6786 | 0.3510 | 0.6786 | 0.8238 |
| No log | 6.0 | 432 | 0.6698 | 0.3510 | 0.6698 | 0.8184 |
| No log | 6.0278 | 434 | 0.6783 | 0.3510 | 0.6783 | 0.8236 |
| No log | 6.0556 | 436 | 0.6617 | 0.3670 | 0.6617 | 0.8134 |
| No log | 6.0833 | 438 | 0.6239 | 0.3280 | 0.6239 | 0.7899 |
| No log | 6.1111 | 440 | 0.6122 | 0.3602 | 0.6122 | 0.7824 |
| No log | 6.1389 | 442 | 0.6201 | 0.3602 | 0.6201 | 0.7874 |
| No log | 6.1667 | 444 | 0.6662 | 0.3494 | 0.6662 | 0.8162 |
| No log | 6.1944 | 446 | 0.7120 | 0.3227 | 0.7120 | 0.8438 |
| No log | 6.2222 | 448 | 0.7941 | 0.3526 | 0.7941 | 0.8911 |
| No log | 6.25 | 450 | 0.8223 | 0.3316 | 0.8223 | 0.9068 |
| No log | 6.2778 | 452 | 0.7781 | 0.3650 | 0.7781 | 0.8821 |
| No log | 6.3056 | 454 | 0.7428 | 0.4004 | 0.7428 | 0.8618 |
| No log | 6.3333 | 456 | 0.7307 | 0.3485 | 0.7307 | 0.8548 |
| No log | 6.3611 | 458 | 0.6793 | 0.3990 | 0.6793 | 0.8242 |
| No log | 6.3889 | 460 | 0.6471 | 0.3563 | 0.6471 | 0.8044 |
| No log | 6.4167 | 462 | 0.6203 | 0.3713 | 0.6203 | 0.7876 |
| No log | 6.4444 | 464 | 0.6059 | 0.3692 | 0.6059 | 0.7784 |
| No log | 6.4722 | 466 | 0.6111 | 0.3692 | 0.6111 | 0.7817 |
| No log | 6.5 | 468 | 0.6170 | 0.3413 | 0.6170 | 0.7855 |
| No log | 6.5278 | 470 | 0.6458 | 0.3413 | 0.6458 | 0.8036 |
| No log | 6.5556 | 472 | 0.6733 | 0.4149 | 0.6733 | 0.8206 |
| No log | 6.5833 | 474 | 0.7207 | 0.3868 | 0.7207 | 0.8490 |
| No log | 6.6111 | 476 | 0.7165 | 0.3868 | 0.7165 | 0.8465 |
| No log | 6.6389 | 478 | 0.6614 | 0.3949 | 0.6614 | 0.8133 |
| No log | 6.6667 | 480 | 0.6394 | 0.3413 | 0.6394 | 0.7997 |
| No log | 6.6944 | 482 | 0.5997 | 0.3786 | 0.5997 | 0.7744 |
| No log | 6.7222 | 484 | 0.5821 | 0.4293 | 0.5821 | 0.7629 |
| No log | 6.75 | 486 | 0.5847 | 0.4293 | 0.5847 | 0.7646 |
| No log | 6.7778 | 488 | 0.5774 | 0.4293 | 0.5774 | 0.7598 |
| No log | 6.8056 | 490 | 0.5820 | 0.4293 | 0.5820 | 0.7629 |
| No log | 6.8333 | 492 | 0.5989 | 0.3786 | 0.5989 | 0.7739 |
| No log | 6.8611 | 494 | 0.6220 | 0.3783 | 0.6220 | 0.7886 |
| No log | 6.8889 | 496 | 0.6206 | 0.4240 | 0.6206 | 0.7878 |
| No log | 6.9167 | 498 | 0.6077 | 0.4699 | 0.6077 | 0.7795 |
| 0.3256 | 6.9444 | 500 | 0.5835 | 0.3844 | 0.5835 | 0.7639 |
| 0.3256 | 6.9722 | 502 | 0.5617 | 0.3390 | 0.5617 | 0.7495 |
| 0.3256 | 7.0 | 504 | 0.5599 | 0.3390 | 0.5599 | 0.7483 |
| 0.3256 | 7.0278 | 506 | 0.5658 | 0.3413 | 0.5658 | 0.7522 |
| 0.3256 | 7.0556 | 508 | 0.5839 | 0.3794 | 0.5839 | 0.7641 |
| 0.3256 | 7.0833 | 510 | 0.5893 | 0.4203 | 0.5893 | 0.7677 |
| 0.3256 | 7.1111 | 512 | 0.5887 | 0.4203 | 0.5887 | 0.7673 |
| 0.3256 | 7.1389 | 514 | 0.6032 | 0.4562 | 0.6032 | 0.7766 |
| 0.3256 | 7.1667 | 516 | 0.6163 | 0.4562 | 0.6163 | 0.7851 |
| 0.3256 | 7.1944 | 518 | 0.6237 | 0.4582 | 0.6237 | 0.7898 |
| 0.3256 | 7.2222 | 520 | 0.6171 | 0.4566 | 0.6171 | 0.7855 |
| 0.3256 | 7.25 | 522 | 0.6095 | 0.4958 | 0.6095 | 0.7807 |
| 0.3256 | 7.2778 | 524 | 0.6023 | 0.4638 | 0.6023 | 0.7761 |
| 0.3256 | 7.3056 | 526 | 0.6014 | 0.4635 | 0.6014 | 0.7755 |
| 0.3256 | 7.3333 | 528 | 0.6173 | 0.4562 | 0.6173 | 0.7857 |
| 0.3256 | 7.3611 | 530 | 0.6450 | 0.4314 | 0.6450 | 0.8031 |
| 0.3256 | 7.3889 | 532 | 0.7009 | 0.4373 | 0.7009 | 0.8372 |
| 0.3256 | 7.4167 | 534 | 0.7274 | 0.4558 | 0.7274 | 0.8529 |
| 0.3256 | 7.4444 | 536 | 0.7150 | 0.4421 | 0.7150 | 0.8456 |
| 0.3256 | 7.4722 | 538 | 0.6849 | 0.4373 | 0.6849 | 0.8276 |
| 0.3256 | 7.5 | 540 | 0.6621 | 0.4347 | 0.6621 | 0.8137 |
| 0.3256 | 7.5278 | 542 | 0.6350 | 0.4240 | 0.6350 | 0.7968 |
| 0.3256 | 7.5556 | 544 | 0.6122 | 0.4203 | 0.6122 | 0.7824 |
| 0.3256 | 7.5833 | 546 | 0.6032 | 0.3794 | 0.6032 | 0.7767 |
| 0.3256 | 7.6111 | 548 | 0.6145 | 0.3692 | 0.6145 | 0.7839 |
| 0.3256 | 7.6389 | 550 | 0.6430 | 0.3681 | 0.6430 | 0.8019 |
| 0.3256 | 7.6667 | 552 | 0.6677 | 0.3858 | 0.6677 | 0.8171 |
| 0.3256 | 7.6944 | 554 | 0.6848 | 0.3905 | 0.6848 | 0.8275 |
| 0.3256 | 7.7222 | 556 | 0.7073 | 0.3775 | 0.7073 | 0.8410 |
| 0.3256 | 7.75 | 558 | 0.7157 | 0.3775 | 0.7157 | 0.8460 |
| 0.3256 | 7.7778 | 560 | 0.7223 | 0.4231 | 0.7223 | 0.8499 |
| 0.3256 | 7.8056 | 562 | 0.7072 | 0.4363 | 0.7072 | 0.8409 |
| 0.3256 | 7.8333 | 564 | 0.6749 | 0.4347 | 0.6749 | 0.8215 |
| 0.3256 | 7.8611 | 566 | 0.6402 | 0.4717 | 0.6402 | 0.8002 |
| 0.3256 | 7.8889 | 568 | 0.6243 | 0.4528 | 0.6243 | 0.7901 |
| 0.3256 | 7.9167 | 570 | 0.6203 | 0.4528 | 0.6203 | 0.7876 |
| 0.3256 | 7.9444 | 572 | 0.6169 | 0.4668 | 0.6169 | 0.7855 |
| 0.3256 | 7.9722 | 574 | 0.6248 | 0.4550 | 0.6248 | 0.7905 |
| 0.3256 | 8.0 | 576 | 0.6367 | 0.4562 | 0.6367 | 0.7979 |
| 0.3256 | 8.0278 | 578 | 0.6599 | 0.4357 | 0.6599 | 0.8124 |
| 0.3256 | 8.0556 | 580 | 0.6834 | 0.3989 | 0.6834 | 0.8267 |
| 0.3256 | 8.0833 | 582 | 0.7312 | 0.4538 | 0.7312 | 0.8551 |
| 0.3256 | 8.1111 | 584 | 0.7624 | 0.4022 | 0.7624 | 0.8732 |
| 0.3256 | 8.1389 | 586 | 0.7617 | 0.4022 | 0.7617 | 0.8727 |
| 0.3256 | 8.1667 | 588 | 0.7468 | 0.4022 | 0.7468 | 0.8642 |
| 0.3256 | 8.1944 | 590 | 0.7074 | 0.4538 | 0.7074 | 0.8410 |
| 0.3256 | 8.2222 | 592 | 0.6710 | 0.4319 | 0.6710 | 0.8191 |
| 0.3256 | 8.25 | 594 | 0.6402 | 0.4272 | 0.6402 | 0.8001 |
| 0.3256 | 8.2778 | 596 | 0.6312 | 0.4240 | 0.6312 | 0.7945 |
| 0.3256 | 8.3056 | 598 | 0.6390 | 0.4272 | 0.6390 | 0.7994 |
| 0.3256 | 8.3333 | 600 | 0.6473 | 0.3830 | 0.6473 | 0.8045 |
| 0.3256 | 8.3611 | 602 | 0.6432 | 0.3783 | 0.6432 | 0.8020 |
| 0.3256 | 8.3889 | 604 | 0.6388 | 0.3783 | 0.6388 | 0.7993 |
| 0.3256 | 8.4167 | 606 | 0.6315 | 0.3783 | 0.6315 | 0.7946 |
| 0.3256 | 8.4444 | 608 | 0.6287 | 0.3783 | 0.6287 | 0.7929 |
| 0.3256 | 8.4722 | 610 | 0.6251 | 0.3783 | 0.6251 | 0.7906 |
| 0.3256 | 8.5 | 612 | 0.6226 | 0.4240 | 0.6226 | 0.7890 |
| 0.3256 | 8.5278 | 614 | 0.6230 | 0.3783 | 0.6230 | 0.7893 |
| 0.3256 | 8.5556 | 616 | 0.6194 | 0.4075 | 0.6194 | 0.7870 |
| 0.3256 | 8.5833 | 618 | 0.6156 | 0.3713 | 0.6156 | 0.7846 |
| 0.3256 | 8.6111 | 620 | 0.6101 | 0.3659 | 0.6101 | 0.7811 |
| 0.3256 | 8.6389 | 622 | 0.6120 | 0.3659 | 0.6120 | 0.7823 |
| 0.3256 | 8.6667 | 624 | 0.6251 | 0.3783 | 0.6251 | 0.7907 |
| 0.3256 | 8.6944 | 626 | 0.6458 | 0.3783 | 0.6458 | 0.8036 |
| 0.3256 | 8.7222 | 628 | 0.6650 | 0.3949 | 0.6650 | 0.8154 |
| 0.3256 | 8.75 | 630 | 0.6829 | 0.4028 | 0.6829 | 0.8264 |
| 0.3256 | 8.7778 | 632 | 0.6980 | 0.3892 | 0.6980 | 0.8355 |
| 0.3256 | 8.8056 | 634 | 0.7072 | 0.3970 | 0.7072 | 0.8410 |
| 0.3256 | 8.8333 | 636 | 0.7040 | 0.3970 | 0.7040 | 0.8390 |
| 0.3256 | 8.8611 | 638 | 0.6927 | 0.4064 | 0.6927 | 0.8323 |
| 0.3256 | 8.8889 | 640 | 0.6907 | 0.4064 | 0.6907 | 0.8311 |
| 0.3256 | 8.9167 | 642 | 0.6907 | 0.4064 | 0.6907 | 0.8311 |
| 0.3256 | 8.9444 | 644 | 0.6795 | 0.4028 | 0.6795 | 0.8243 |
| 0.3256 | 8.9722 | 646 | 0.6649 | 0.3873 | 0.6649 | 0.8154 |
| 0.3256 | 9.0 | 648 | 0.6560 | 0.3830 | 0.6560 | 0.8100 |
| 0.3256 | 9.0278 | 650 | 0.6588 | 0.3873 | 0.6588 | 0.8116 |
| 0.3256 | 9.0556 | 652 | 0.6658 | 0.4028 | 0.6658 | 0.8159 |
| 0.3256 | 9.0833 | 654 | 0.6755 | 0.4064 | 0.6755 | 0.8219 |
| 0.3256 | 9.1111 | 656 | 0.6776 | 0.4064 | 0.6776 | 0.8231 |
| 0.3256 | 9.1389 | 658 | 0.6726 | 0.4064 | 0.6726 | 0.8201 |
| 0.3256 | 9.1667 | 660 | 0.6707 | 0.3953 | 0.6707 | 0.8189 |
| 0.3256 | 9.1944 | 662 | 0.6723 | 0.3953 | 0.6723 | 0.8199 |
| 0.3256 | 9.2222 | 664 | 0.6792 | 0.4098 | 0.6792 | 0.8241 |
| 0.3256 | 9.25 | 666 | 0.6793 | 0.4098 | 0.6793 | 0.8242 |
| 0.3256 | 9.2778 | 668 | 0.6724 | 0.3953 | 0.6724 | 0.8200 |
| 0.3256 | 9.3056 | 670 | 0.6671 | 0.3953 | 0.6671 | 0.8168 |
| 0.3256 | 9.3333 | 672 | 0.6581 | 0.3914 | 0.6581 | 0.8112 |
| 0.3256 | 9.3611 | 674 | 0.6537 | 0.3873 | 0.6537 | 0.8085 |
| 0.3256 | 9.3889 | 676 | 0.6513 | 0.3873 | 0.6513 | 0.8071 |
| 0.3256 | 9.4167 | 678 | 0.6501 | 0.3873 | 0.6501 | 0.8063 |
| 0.3256 | 9.4444 | 680 | 0.6538 | 0.3873 | 0.6538 | 0.8086 |
| 0.3256 | 9.4722 | 682 | 0.6629 | 0.3914 | 0.6629 | 0.8142 |
| 0.3256 | 9.5 | 684 | 0.6708 | 0.4064 | 0.6708 | 0.8190 |
| 0.3256 | 9.5278 | 686 | 0.6756 | 0.4496 | 0.6756 | 0.8219 |
| 0.3256 | 9.5556 | 688 | 0.6789 | 0.4518 | 0.6789 | 0.8240 |
| 0.3256 | 9.5833 | 690 | 0.6854 | 0.4518 | 0.6854 | 0.8279 |
| 0.3256 | 9.6111 | 692 | 0.6891 | 0.4518 | 0.6891 | 0.8301 |
| 0.3256 | 9.6389 | 694 | 0.6919 | 0.4518 | 0.6919 | 0.8318 |
| 0.3256 | 9.6667 | 696 | 0.6936 | 0.4518 | 0.6936 | 0.8328 |
| 0.3256 | 9.6944 | 698 | 0.6949 | 0.4518 | 0.6949 | 0.8336 |
| 0.3256 | 9.7222 | 700 | 0.6935 | 0.4518 | 0.6935 | 0.8328 |
| 0.3256 | 9.75 | 702 | 0.6906 | 0.4518 | 0.6906 | 0.8310 |
| 0.3256 | 9.7778 | 704 | 0.6907 | 0.4518 | 0.6907 | 0.8311 |
| 0.3256 | 9.8056 | 706 | 0.6907 | 0.4518 | 0.6907 | 0.8311 |
| 0.3256 | 9.8333 | 708 | 0.6905 | 0.4518 | 0.6905 | 0.8309 |
| 0.3256 | 9.8611 | 710 | 0.6889 | 0.4518 | 0.6889 | 0.8300 |
| 0.3256 | 9.8889 | 712 | 0.6860 | 0.4518 | 0.6860 | 0.8283 |
| 0.3256 | 9.9167 | 714 | 0.6834 | 0.4518 | 0.6834 | 0.8267 |
| 0.3256 | 9.9444 | 716 | 0.6824 | 0.4518 | 0.6824 | 0.8261 |
| 0.3256 | 9.9722 | 718 | 0.6819 | 0.4518 | 0.6819 | 0.8258 |
| 0.3256 | 10.0 | 720 | 0.6817 | 0.4518 | 0.6817 | 0.8256 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Devishri1/test_bert2 | Devishri1 | 2024-11-26T14:56:58Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-11-26T14:56:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k20_task2_organization_fold0 | MayBashendy | 2024-11-26T14:54:55Z | 164 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T14:44:17Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits_FineTuningAraBERT_AugV5_k20_task2_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits_FineTuningAraBERT_AugV5_k20_task2_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7196
- Qwk: 0.3647
- Mse: 0.7196
- Rmse: 0.8483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0312 | 2 | 3.5453 | -0.0165 | 3.5453 | 1.8829 |
| No log | 0.0625 | 4 | 2.0293 | -0.0257 | 2.0293 | 1.4245 |
| No log | 0.0938 | 6 | 1.5598 | 0.0407 | 1.5598 | 1.2489 |
| No log | 0.125 | 8 | 1.8322 | 0.0031 | 1.8322 | 1.3536 |
| No log | 0.1562 | 10 | 1.6017 | 0.0399 | 1.6017 | 1.2656 |
| No log | 0.1875 | 12 | 1.3022 | 0.0883 | 1.3022 | 1.1412 |
| No log | 0.2188 | 14 | 1.1222 | 0.0455 | 1.1222 | 1.0594 |
| No log | 0.25 | 16 | 1.0853 | 0.0455 | 1.0853 | 1.0418 |
| No log | 0.2812 | 18 | 1.2283 | 0.0028 | 1.2283 | 1.1083 |
| No log | 0.3125 | 20 | 1.5649 | 0.0514 | 1.5649 | 1.2509 |
| No log | 0.3438 | 22 | 1.5049 | 0.0178 | 1.5049 | 1.2267 |
| No log | 0.375 | 24 | 1.4718 | -0.0478 | 1.4718 | 1.2132 |
| No log | 0.4062 | 26 | 1.4872 | -0.0333 | 1.4872 | 1.2195 |
| No log | 0.4375 | 28 | 1.2807 | 0.0028 | 1.2807 | 1.1317 |
| No log | 0.4688 | 30 | 1.1286 | 0.0724 | 1.1286 | 1.0624 |
| No log | 0.5 | 32 | 1.1155 | 0.1172 | 1.1155 | 1.0562 |
| No log | 0.5312 | 34 | 1.0507 | 0.1172 | 1.0507 | 1.0251 |
| No log | 0.5625 | 36 | 1.0209 | 0.1480 | 1.0209 | 1.0104 |
| No log | 0.5938 | 38 | 1.0974 | 0.0725 | 1.0974 | 1.0476 |
| No log | 0.625 | 40 | 1.2171 | 0.0725 | 1.2171 | 1.1032 |
| No log | 0.6562 | 42 | 1.2582 | 0.1449 | 1.2582 | 1.1217 |
| No log | 0.6875 | 44 | 1.2359 | 0.1311 | 1.2359 | 1.1117 |
| No log | 0.7188 | 46 | 1.1910 | 0.1311 | 1.1910 | 1.0913 |
| No log | 0.75 | 48 | 1.0772 | 0.1311 | 1.0772 | 1.0379 |
| No log | 0.7812 | 50 | 0.9740 | 0.0896 | 0.9740 | 0.9869 |
| No log | 0.8125 | 52 | 0.9995 | 0.1172 | 0.9995 | 0.9998 |
| No log | 0.8438 | 54 | 1.0793 | 0.1567 | 1.0793 | 1.0389 |
| No log | 0.875 | 56 | 1.2858 | 0.0964 | 1.2858 | 1.1339 |
| No log | 0.9062 | 58 | 1.3258 | 0.0964 | 1.3258 | 1.1514 |
| No log | 0.9375 | 60 | 1.1654 | 0.1567 | 1.1654 | 1.0795 |
| No log | 0.9688 | 62 | 1.0623 | 0.1172 | 1.0623 | 1.0307 |
| No log | 1.0 | 64 | 0.8692 | 0.0739 | 0.8692 | 0.9323 |
| No log | 1.0312 | 66 | 0.7661 | 0.0367 | 0.7661 | 0.8753 |
| No log | 1.0625 | 68 | 0.8066 | 0.1148 | 0.8066 | 0.8981 |
| No log | 1.0938 | 70 | 0.9341 | 0.1284 | 0.9341 | 0.9665 |
| No log | 1.125 | 72 | 1.1841 | 0.0953 | 1.1841 | 1.0882 |
| No log | 1.1562 | 74 | 1.0123 | 0.0954 | 1.0123 | 1.0061 |
| No log | 1.1875 | 76 | 0.8317 | 0.1485 | 0.8317 | 0.9120 |
| No log | 1.2188 | 78 | 0.8514 | 0.2353 | 0.8514 | 0.9227 |
| No log | 1.25 | 80 | 0.9989 | 0.1298 | 0.9989 | 0.9995 |
| No log | 1.2812 | 82 | 1.0122 | 0.1480 | 1.0122 | 1.0061 |
| No log | 1.3125 | 84 | 0.8471 | 0.2563 | 0.8471 | 0.9204 |
| No log | 1.3438 | 86 | 0.8204 | 0.3243 | 0.8204 | 0.9058 |
| No log | 1.375 | 88 | 0.8882 | 0.2209 | 0.8882 | 0.9424 |
| No log | 1.4062 | 90 | 0.8860 | 0.2209 | 0.8860 | 0.9413 |
| No log | 1.4375 | 92 | 0.9287 | 0.1766 | 0.9287 | 0.9637 |
| No log | 1.4688 | 94 | 0.8537 | 0.3098 | 0.8537 | 0.9239 |
| No log | 1.5 | 96 | 0.7246 | 0.2899 | 0.7246 | 0.8512 |
| No log | 1.5312 | 98 | 0.6786 | 0.0899 | 0.6786 | 0.8238 |
| No log | 1.5625 | 100 | 0.7207 | 0.1349 | 0.7207 | 0.8489 |
| No log | 1.5938 | 102 | 0.7016 | 0.1499 | 0.7016 | 0.8376 |
| No log | 1.625 | 104 | 0.6794 | 0.1499 | 0.6794 | 0.8242 |
| No log | 1.6562 | 106 | 0.6559 | 0.2969 | 0.6559 | 0.8099 |
| No log | 1.6875 | 108 | 0.7236 | 0.2271 | 0.7236 | 0.8506 |
| No log | 1.7188 | 110 | 0.7992 | 0.2802 | 0.7992 | 0.8940 |
| No log | 1.75 | 112 | 0.7401 | 0.2259 | 0.7401 | 0.8603 |
| No log | 1.7812 | 114 | 0.7283 | 0.2101 | 0.7283 | 0.8534 |
| No log | 1.8125 | 116 | 0.6936 | 0.1307 | 0.6936 | 0.8328 |
| No log | 1.8438 | 118 | 0.7124 | 0.2150 | 0.7124 | 0.8440 |
| No log | 1.875 | 120 | 0.7445 | 0.1746 | 0.7445 | 0.8628 |
| No log | 1.9062 | 122 | 0.9366 | 0.2116 | 0.9366 | 0.9678 |
| No log | 1.9375 | 124 | 1.1293 | 0.1453 | 1.1293 | 1.0627 |
| No log | 1.9688 | 126 | 0.9528 | 0.2005 | 0.9528 | 0.9761 |
| No log | 2.0 | 128 | 0.8721 | 0.2069 | 0.8721 | 0.9339 |
| No log | 2.0312 | 130 | 0.8055 | 0.2575 | 0.8055 | 0.8975 |
| No log | 2.0625 | 132 | 0.7419 | 0.1486 | 0.7419 | 0.8613 |
| No log | 2.0938 | 134 | 0.7256 | 0.0811 | 0.7256 | 0.8518 |
| No log | 2.125 | 136 | 0.7547 | 0.2247 | 0.7547 | 0.8687 |
| No log | 2.1562 | 138 | 0.8620 | 0.2965 | 0.8620 | 0.9285 |
| No log | 2.1875 | 140 | 0.9177 | 0.2359 | 0.9177 | 0.9580 |
| No log | 2.2188 | 142 | 0.8548 | 0.2392 | 0.8548 | 0.9245 |
| No log | 2.25 | 144 | 0.7304 | 0.1664 | 0.7304 | 0.8546 |
| No log | 2.2812 | 146 | 0.6850 | 0.2481 | 0.6850 | 0.8277 |
| No log | 2.3125 | 148 | 0.7060 | 0.2276 | 0.7060 | 0.8402 |
| No log | 2.3438 | 150 | 0.7956 | 0.3568 | 0.7956 | 0.8919 |
| No log | 2.375 | 152 | 0.7717 | 0.3433 | 0.7717 | 0.8785 |
| No log | 2.4062 | 154 | 0.6577 | 0.3668 | 0.6577 | 0.8110 |
| No log | 2.4375 | 156 | 0.6620 | 0.2499 | 0.6620 | 0.8136 |
| No log | 2.4688 | 158 | 0.6348 | 0.2499 | 0.6348 | 0.7968 |
| No log | 2.5 | 160 | 0.6486 | 0.2782 | 0.6486 | 0.8054 |
| No log | 2.5312 | 162 | 0.6570 | 0.2723 | 0.6570 | 0.8106 |
| No log | 2.5625 | 164 | 0.7060 | 0.3134 | 0.7060 | 0.8403 |
| No log | 2.5938 | 166 | 0.6881 | 0.1668 | 0.6881 | 0.8295 |
| No log | 2.625 | 168 | 0.7263 | 0.3533 | 0.7263 | 0.8522 |
| No log | 2.6562 | 170 | 0.8086 | 0.3323 | 0.8086 | 0.8992 |
| No log | 2.6875 | 172 | 0.7590 | 0.3460 | 0.7590 | 0.8712 |
| No log | 2.7188 | 174 | 0.6778 | 0.3695 | 0.6778 | 0.8233 |
| No log | 2.75 | 176 | 0.6960 | 0.2205 | 0.6960 | 0.8343 |
| No log | 2.7812 | 178 | 0.7578 | 0.3182 | 0.7578 | 0.8705 |
| No log | 2.8125 | 180 | 0.7725 | 0.2888 | 0.7725 | 0.8789 |
| No log | 2.8438 | 182 | 0.6894 | 0.2494 | 0.6894 | 0.8303 |
| No log | 2.875 | 184 | 0.6530 | 0.2268 | 0.6530 | 0.8081 |
| No log | 2.9062 | 186 | 0.6841 | 0.3284 | 0.6841 | 0.8271 |
| No log | 2.9375 | 188 | 0.6793 | 0.3884 | 0.6793 | 0.8242 |
| No log | 2.9688 | 190 | 0.6841 | 0.2895 | 0.6841 | 0.8271 |
| No log | 3.0 | 192 | 0.7703 | 0.2872 | 0.7703 | 0.8777 |
| No log | 3.0312 | 194 | 0.7606 | 0.3045 | 0.7606 | 0.8721 |
| No log | 3.0625 | 196 | 0.6991 | 0.3966 | 0.6991 | 0.8361 |
| No log | 3.0938 | 198 | 0.6797 | 0.3893 | 0.6797 | 0.8244 |
| No log | 3.125 | 200 | 0.7138 | 0.4386 | 0.7138 | 0.8449 |
| No log | 3.1562 | 202 | 0.7177 | 0.4267 | 0.7177 | 0.8472 |
| No log | 3.1875 | 204 | 0.6928 | 0.3150 | 0.6928 | 0.8324 |
| No log | 3.2188 | 206 | 0.7170 | 0.3949 | 0.7170 | 0.8468 |
| No log | 3.25 | 208 | 0.7495 | 0.3380 | 0.7495 | 0.8657 |
| No log | 3.2812 | 210 | 0.7892 | 0.2787 | 0.7892 | 0.8884 |
| No log | 3.3125 | 212 | 0.8128 | 0.3210 | 0.8128 | 0.9016 |
| No log | 3.3438 | 214 | 0.7231 | 0.3679 | 0.7231 | 0.8504 |
| No log | 3.375 | 216 | 0.6323 | 0.4143 | 0.6323 | 0.7952 |
| No log | 3.4062 | 218 | 0.6495 | 0.3621 | 0.6495 | 0.8059 |
| No log | 3.4375 | 220 | 0.6676 | 0.3490 | 0.6676 | 0.8171 |
| No log | 3.4688 | 222 | 0.6435 | 0.3313 | 0.6435 | 0.8022 |
| No log | 3.5 | 224 | 0.6527 | 0.3596 | 0.6527 | 0.8079 |
| No log | 3.5312 | 226 | 0.6878 | 0.3436 | 0.6878 | 0.8293 |
| No log | 3.5625 | 228 | 0.7457 | 0.2973 | 0.7457 | 0.8635 |
| No log | 3.5938 | 230 | 0.7376 | 0.2917 | 0.7376 | 0.8588 |
| No log | 3.625 | 232 | 0.6559 | 0.2783 | 0.6559 | 0.8099 |
| No log | 3.6562 | 234 | 0.6153 | 0.3315 | 0.6153 | 0.7844 |
| No log | 3.6875 | 236 | 0.6115 | 0.3541 | 0.6115 | 0.7820 |
| No log | 3.7188 | 238 | 0.6173 | 0.3949 | 0.6173 | 0.7857 |
| No log | 3.75 | 240 | 0.6278 | 0.4077 | 0.6278 | 0.7924 |
| No log | 3.7812 | 242 | 0.6521 | 0.3888 | 0.6521 | 0.8075 |
| No log | 3.8125 | 244 | 0.6432 | 0.3897 | 0.6432 | 0.8020 |
| No log | 3.8438 | 246 | 0.6115 | 0.3756 | 0.6115 | 0.7820 |
| No log | 3.875 | 248 | 0.6107 | 0.3384 | 0.6107 | 0.7815 |
| No log | 3.9062 | 250 | 0.6113 | 0.3443 | 0.6113 | 0.7819 |
| No log | 3.9375 | 252 | 0.5989 | 0.3340 | 0.5989 | 0.7739 |
| No log | 3.9688 | 254 | 0.5851 | 0.2916 | 0.5851 | 0.7649 |
| No log | 4.0 | 256 | 0.5935 | 0.3756 | 0.5935 | 0.7704 |
| No log | 4.0312 | 258 | 0.6091 | 0.3756 | 0.6091 | 0.7805 |
| No log | 4.0625 | 260 | 0.6209 | 0.3941 | 0.6209 | 0.7880 |
| No log | 4.0938 | 262 | 0.6393 | 0.4069 | 0.6393 | 0.7995 |
| No log | 4.125 | 264 | 0.6342 | 0.3795 | 0.6342 | 0.7964 |
| No log | 4.1562 | 266 | 0.6299 | 0.4069 | 0.6299 | 0.7937 |
| No log | 4.1875 | 268 | 0.6474 | 0.2928 | 0.6474 | 0.8046 |
| No log | 4.2188 | 270 | 0.6895 | 0.3266 | 0.6895 | 0.8303 |
| No log | 4.25 | 272 | 0.7118 | 0.3392 | 0.7118 | 0.8437 |
| No log | 4.2812 | 274 | 0.7028 | 0.4234 | 0.7028 | 0.8383 |
| No log | 4.3125 | 276 | 0.6800 | 0.4225 | 0.6800 | 0.8246 |
| No log | 4.3438 | 278 | 0.6621 | 0.4208 | 0.6621 | 0.8137 |
| No log | 4.375 | 280 | 0.6512 | 0.4371 | 0.6512 | 0.8069 |
| No log | 4.4062 | 282 | 0.6610 | 0.4509 | 0.6610 | 0.8130 |
| No log | 4.4375 | 284 | 0.6991 | 0.4774 | 0.6991 | 0.8361 |
| No log | 4.4688 | 286 | 0.7171 | 0.4118 | 0.7171 | 0.8468 |
| No log | 4.5 | 288 | 0.7481 | 0.4008 | 0.7481 | 0.8650 |
| No log | 4.5312 | 290 | 0.7791 | 0.3541 | 0.7791 | 0.8827 |
| No log | 4.5625 | 292 | 0.7974 | 0.3044 | 0.7974 | 0.8930 |
| No log | 4.5938 | 294 | 0.7658 | 0.3218 | 0.7658 | 0.8751 |
| No log | 4.625 | 296 | 0.7027 | 0.3276 | 0.7027 | 0.8382 |
| No log | 4.6562 | 298 | 0.6851 | 0.4043 | 0.6851 | 0.8277 |
| No log | 4.6875 | 300 | 0.6449 | 0.4189 | 0.6449 | 0.8030 |
| No log | 4.7188 | 302 | 0.6134 | 0.4198 | 0.6134 | 0.7832 |
| No log | 4.75 | 304 | 0.6132 | 0.4335 | 0.6132 | 0.7831 |
| No log | 4.7812 | 306 | 0.6306 | 0.4189 | 0.6306 | 0.7941 |
| No log | 4.8125 | 308 | 0.6231 | 0.4335 | 0.6231 | 0.7894 |
| No log | 4.8438 | 310 | 0.5969 | 0.4248 | 0.5969 | 0.7726 |
| No log | 4.875 | 312 | 0.5851 | 0.4817 | 0.5851 | 0.7650 |
| No log | 4.9062 | 314 | 0.5915 | 0.4943 | 0.5915 | 0.7691 |
| No log | 4.9375 | 316 | 0.6346 | 0.4625 | 0.6346 | 0.7966 |
| No log | 4.9688 | 318 | 0.7008 | 0.3809 | 0.7008 | 0.8372 |
| No log | 5.0 | 320 | 0.6866 | 0.3809 | 0.6866 | 0.8286 |
| No log | 5.0312 | 322 | 0.6753 | 0.3948 | 0.6753 | 0.8218 |
| No log | 5.0625 | 324 | 0.6525 | 0.4216 | 0.6525 | 0.8078 |
| No log | 5.0938 | 326 | 0.6558 | 0.4371 | 0.6558 | 0.8098 |
| No log | 5.125 | 328 | 0.6554 | 0.4371 | 0.6554 | 0.8096 |
| No log | 5.1562 | 330 | 0.6502 | 0.4499 | 0.6502 | 0.8064 |
| No log | 5.1875 | 332 | 0.6377 | 0.4365 | 0.6377 | 0.7985 |
| No log | 5.2188 | 334 | 0.6478 | 0.3769 | 0.6478 | 0.8049 |
| No log | 5.25 | 336 | 0.6754 | 0.3613 | 0.6754 | 0.8218 |
| No log | 5.2812 | 338 | 0.6686 | 0.3613 | 0.6686 | 0.8176 |
| No log | 5.3125 | 340 | 0.6221 | 0.3456 | 0.6221 | 0.7887 |
| No log | 5.3438 | 342 | 0.6023 | 0.3932 | 0.6023 | 0.7761 |
| No log | 5.375 | 344 | 0.5977 | 0.3465 | 0.5977 | 0.7731 |
| No log | 5.4062 | 346 | 0.5976 | 0.3035 | 0.5976 | 0.7730 |
| No log | 5.4375 | 348 | 0.6057 | 0.3596 | 0.6057 | 0.7783 |
| No log | 5.4688 | 350 | 0.6112 | 0.3446 | 0.6112 | 0.7818 |
| No log | 5.5 | 352 | 0.6580 | 0.3427 | 0.6580 | 0.8112 |
| No log | 5.5312 | 354 | 0.7280 | 0.3218 | 0.7280 | 0.8532 |
| No log | 5.5625 | 356 | 0.7759 | 0.3758 | 0.7759 | 0.8808 |
| No log | 5.5938 | 358 | 0.7364 | 0.4161 | 0.7364 | 0.8581 |
| No log | 5.625 | 360 | 0.7004 | 0.4126 | 0.7004 | 0.8369 |
| No log | 5.6562 | 362 | 0.6557 | 0.4216 | 0.6557 | 0.8098 |
| No log | 5.6875 | 364 | 0.6232 | 0.3949 | 0.6232 | 0.7895 |
| No log | 5.7188 | 366 | 0.6143 | 0.4232 | 0.6143 | 0.7838 |
| No log | 5.75 | 368 | 0.6380 | 0.3427 | 0.6380 | 0.7988 |
| No log | 5.7812 | 370 | 0.6630 | 0.3427 | 0.6630 | 0.8143 |
| No log | 5.8125 | 372 | 0.6502 | 0.3427 | 0.6502 | 0.8064 |
| No log | 5.8438 | 374 | 0.6368 | 0.3878 | 0.6368 | 0.7980 |
| No log | 5.875 | 376 | 0.6499 | 0.3737 | 0.6499 | 0.8061 |
| No log | 5.9062 | 378 | 0.6483 | 0.4060 | 0.6483 | 0.8052 |
| No log | 5.9375 | 380 | 0.6650 | 0.4491 | 0.6650 | 0.8155 |
| No log | 5.9688 | 382 | 0.6620 | 0.4198 | 0.6620 | 0.8137 |
| No log | 6.0 | 384 | 0.6681 | 0.4060 | 0.6681 | 0.8174 |
| No log | 6.0312 | 386 | 0.6765 | 0.3446 | 0.6765 | 0.8225 |
| No log | 6.0625 | 388 | 0.6855 | 0.3286 | 0.6855 | 0.8280 |
| No log | 6.0938 | 390 | 0.7016 | 0.3613 | 0.7016 | 0.8376 |
| No log | 6.125 | 392 | 0.7080 | 0.4208 | 0.7080 | 0.8415 |
| No log | 6.1562 | 394 | 0.6886 | 0.4357 | 0.6886 | 0.8298 |
| No log | 6.1875 | 396 | 0.6754 | 0.3513 | 0.6754 | 0.8218 |
| No log | 6.2188 | 398 | 0.6737 | 0.3824 | 0.6737 | 0.8208 |
| No log | 6.25 | 400 | 0.6786 | 0.4357 | 0.6786 | 0.8238 |
| No log | 6.2812 | 402 | 0.7040 | 0.3897 | 0.7040 | 0.8391 |
| No log | 6.3125 | 404 | 0.7171 | 0.3751 | 0.7171 | 0.8468 |
| No log | 6.3438 | 406 | 0.7028 | 0.3751 | 0.7028 | 0.8383 |
| No log | 6.375 | 408 | 0.6730 | 0.3622 | 0.6730 | 0.8204 |
| No log | 6.4062 | 410 | 0.6594 | 0.4099 | 0.6594 | 0.8121 |
| No log | 6.4375 | 412 | 0.6544 | 0.3833 | 0.6544 | 0.8090 |
| No log | 6.4688 | 414 | 0.6629 | 0.3682 | 0.6629 | 0.8142 |
| No log | 6.5 | 416 | 0.6989 | 0.3410 | 0.6989 | 0.8360 |
| No log | 6.5312 | 418 | 0.7797 | 0.3792 | 0.7797 | 0.8830 |
| No log | 6.5625 | 420 | 0.8502 | 0.3394 | 0.8502 | 0.9221 |
| No log | 6.5938 | 422 | 0.8751 | 0.3649 | 0.8751 | 0.9354 |
| No log | 6.625 | 424 | 0.8497 | 0.3739 | 0.8497 | 0.9218 |
| No log | 6.6562 | 426 | 0.8161 | 0.3636 | 0.8161 | 0.9034 |
| No log | 6.6875 | 428 | 0.7765 | 0.3341 | 0.7765 | 0.8812 |
| No log | 6.7188 | 430 | 0.7467 | 0.3792 | 0.7467 | 0.8641 |
| No log | 6.75 | 432 | 0.7299 | 0.3766 | 0.7299 | 0.8543 |
| No log | 6.7812 | 434 | 0.7083 | 0.3068 | 0.7083 | 0.8416 |
| No log | 6.8125 | 436 | 0.7030 | 0.3212 | 0.7030 | 0.8384 |
| No log | 6.8438 | 438 | 0.7086 | 0.3075 | 0.7086 | 0.8418 |
| No log | 6.875 | 440 | 0.7160 | 0.2803 | 0.7160 | 0.8462 |
| No log | 6.9062 | 442 | 0.7234 | 0.2939 | 0.7234 | 0.8506 |
| No log | 6.9375 | 444 | 0.7417 | 0.2942 | 0.7417 | 0.8612 |
| No log | 6.9688 | 446 | 0.7720 | 0.3516 | 0.7720 | 0.8786 |
| No log | 7.0 | 448 | 0.7831 | 0.3062 | 0.7831 | 0.8850 |
| No log | 7.0312 | 450 | 0.8212 | 0.3456 | 0.8212 | 0.9062 |
| No log | 7.0625 | 452 | 0.8267 | 0.3456 | 0.8267 | 0.9092 |
| No log | 7.0938 | 454 | 0.7953 | 0.3179 | 0.7953 | 0.8918 |
| No log | 7.125 | 456 | 0.7526 | 0.3447 | 0.7526 | 0.8675 |
| No log | 7.1562 | 458 | 0.7212 | 0.3192 | 0.7212 | 0.8493 |
| No log | 7.1875 | 460 | 0.7107 | 0.3055 | 0.7107 | 0.8430 |
| No log | 7.2188 | 462 | 0.6912 | 0.2782 | 0.6912 | 0.8314 |
| No log | 7.25 | 464 | 0.6882 | 0.2929 | 0.6882 | 0.8296 |
| No log | 7.2812 | 466 | 0.6906 | 0.2929 | 0.6906 | 0.8310 |
| No log | 7.3125 | 468 | 0.7153 | 0.3192 | 0.7153 | 0.8458 |
| No log | 7.3438 | 470 | 0.7309 | 0.3192 | 0.7309 | 0.8549 |
| No log | 7.375 | 472 | 0.7498 | 0.3549 | 0.7498 | 0.8659 |
| No log | 7.4062 | 474 | 0.7854 | 0.3809 | 0.7854 | 0.8862 |
| No log | 7.4375 | 476 | 0.8003 | 0.3836 | 0.8003 | 0.8946 |
| No log | 7.4688 | 478 | 0.7884 | 0.3809 | 0.7884 | 0.8879 |
| No log | 7.5 | 480 | 0.7898 | 0.3679 | 0.7898 | 0.8887 |
| No log | 7.5312 | 482 | 0.8074 | 0.3606 | 0.8074 | 0.8985 |
| No log | 7.5625 | 484 | 0.8393 | 0.3053 | 0.8393 | 0.9162 |
| No log | 7.5938 | 486 | 0.8494 | 0.3095 | 0.8494 | 0.9216 |
| No log | 7.625 | 488 | 0.8258 | 0.3009 | 0.8258 | 0.9087 |
| No log | 7.6562 | 490 | 0.7835 | 0.3549 | 0.7835 | 0.8851 |
| No log | 7.6875 | 492 | 0.7499 | 0.3124 | 0.7499 | 0.8660 |
| No log | 7.7188 | 494 | 0.7309 | 0.2667 | 0.7309 | 0.8549 |
| No log | 7.75 | 496 | 0.7204 | 0.2667 | 0.7204 | 0.8488 |
| No log | 7.7812 | 498 | 0.7120 | 0.2532 | 0.7120 | 0.8438 |
| 0.3198 | 7.8125 | 500 | 0.7095 | 0.2667 | 0.7095 | 0.8423 |
| 0.3198 | 7.8438 | 502 | 0.7065 | 0.2667 | 0.7065 | 0.8406 |
| 0.3198 | 7.875 | 504 | 0.7112 | 0.3065 | 0.7112 | 0.8434 |
| 0.3198 | 7.9062 | 506 | 0.7263 | 0.3622 | 0.7263 | 0.8522 |
| 0.3198 | 7.9375 | 508 | 0.7398 | 0.3751 | 0.7398 | 0.8601 |
| 0.3198 | 7.9688 | 510 | 0.7579 | 0.3266 | 0.7579 | 0.8706 |
| 0.3198 | 8.0 | 512 | 0.7695 | 0.3310 | 0.7695 | 0.8772 |
| 0.3198 | 8.0312 | 514 | 0.7832 | 0.3310 | 0.7832 | 0.8850 |
| 0.3198 | 8.0625 | 516 | 0.7835 | 0.3310 | 0.7835 | 0.8851 |
| 0.3198 | 8.0938 | 518 | 0.7695 | 0.3310 | 0.7695 | 0.8772 |
| 0.3198 | 8.125 | 520 | 0.7436 | 0.3604 | 0.7436 | 0.8623 |
| 0.3198 | 8.1562 | 522 | 0.7216 | 0.3476 | 0.7216 | 0.8495 |
| 0.3198 | 8.1875 | 524 | 0.7175 | 0.3339 | 0.7175 | 0.8470 |
| 0.3198 | 8.2188 | 526 | 0.7199 | 0.3339 | 0.7199 | 0.8484 |
| 0.3198 | 8.25 | 528 | 0.7276 | 0.3613 | 0.7276 | 0.8530 |
| 0.3198 | 8.2812 | 530 | 0.7407 | 0.3604 | 0.7407 | 0.8606 |
| 0.3198 | 8.3125 | 532 | 0.7496 | 0.3604 | 0.7496 | 0.8658 |
| 0.3198 | 8.3438 | 534 | 0.7549 | 0.3638 | 0.7549 | 0.8689 |
| 0.3198 | 8.375 | 536 | 0.7566 | 0.3638 | 0.7566 | 0.8699 |
| 0.3198 | 8.4062 | 538 | 0.7584 | 0.3638 | 0.7584 | 0.8708 |
| 0.3198 | 8.4375 | 540 | 0.7595 | 0.3638 | 0.7595 | 0.8715 |
| 0.3198 | 8.4688 | 542 | 0.7600 | 0.3310 | 0.7600 | 0.8718 |
| 0.3198 | 8.5 | 544 | 0.7588 | 0.3310 | 0.7588 | 0.8711 |
| 0.3198 | 8.5312 | 546 | 0.7502 | 0.3417 | 0.7502 | 0.8661 |
| 0.3198 | 8.5625 | 548 | 0.7390 | 0.3417 | 0.7390 | 0.8597 |
| 0.3198 | 8.5938 | 550 | 0.7265 | 0.3751 | 0.7265 | 0.8524 |
| 0.3198 | 8.625 | 552 | 0.7200 | 0.3476 | 0.7200 | 0.8485 |
| 0.3198 | 8.6562 | 554 | 0.7228 | 0.3613 | 0.7228 | 0.8502 |
| 0.3198 | 8.6875 | 556 | 0.7224 | 0.3751 | 0.7224 | 0.8500 |
| 0.3198 | 8.7188 | 558 | 0.7233 | 0.3751 | 0.7233 | 0.8505 |
| 0.3198 | 8.75 | 560 | 0.7175 | 0.3613 | 0.7175 | 0.8471 |
| 0.3198 | 8.7812 | 562 | 0.7123 | 0.3613 | 0.7123 | 0.8440 |
| 0.3198 | 8.8125 | 564 | 0.7024 | 0.3622 | 0.7024 | 0.8381 |
| 0.3198 | 8.8438 | 566 | 0.6951 | 0.3485 | 0.6951 | 0.8337 |
| 0.3198 | 8.875 | 568 | 0.6886 | 0.3065 | 0.6886 | 0.8298 |
| 0.3198 | 8.9062 | 570 | 0.6867 | 0.3065 | 0.6867 | 0.8286 |
| 0.3198 | 8.9375 | 572 | 0.6861 | 0.3065 | 0.6861 | 0.8283 |
| 0.3198 | 8.9688 | 574 | 0.6906 | 0.3485 | 0.6906 | 0.8310 |
| 0.3198 | 9.0 | 576 | 0.6935 | 0.3622 | 0.6935 | 0.8328 |
| 0.3198 | 9.0312 | 578 | 0.6906 | 0.3622 | 0.6906 | 0.8310 |
| 0.3198 | 9.0625 | 580 | 0.6916 | 0.3622 | 0.6916 | 0.8316 |
| 0.3198 | 9.0938 | 582 | 0.6987 | 0.3760 | 0.6987 | 0.8359 |
| 0.3198 | 9.125 | 584 | 0.7105 | 0.3760 | 0.7105 | 0.8429 |
| 0.3198 | 9.1562 | 586 | 0.7285 | 0.3751 | 0.7285 | 0.8535 |
| 0.3198 | 9.1875 | 588 | 0.7505 | 0.3392 | 0.7505 | 0.8663 |
| 0.3198 | 9.2188 | 590 | 0.7707 | 0.3392 | 0.7707 | 0.8779 |
| 0.3198 | 9.25 | 592 | 0.7826 | 0.3210 | 0.7826 | 0.8846 |
| 0.3198 | 9.2812 | 594 | 0.7893 | 0.3210 | 0.7893 | 0.8884 |
| 0.3198 | 9.3125 | 596 | 0.7886 | 0.3210 | 0.7886 | 0.8880 |
| 0.3198 | 9.3438 | 598 | 0.7878 | 0.3210 | 0.7878 | 0.8876 |
| 0.3198 | 9.375 | 600 | 0.7834 | 0.3210 | 0.7834 | 0.8851 |
| 0.3198 | 9.4062 | 602 | 0.7755 | 0.3210 | 0.7755 | 0.8806 |
| 0.3198 | 9.4375 | 604 | 0.7643 | 0.3430 | 0.7643 | 0.8743 |
| 0.3198 | 9.4688 | 606 | 0.7548 | 0.3701 | 0.7548 | 0.8688 |
| 0.3198 | 9.5 | 608 | 0.7430 | 0.3701 | 0.7430 | 0.8620 |
| 0.3198 | 9.5312 | 610 | 0.7314 | 0.3638 | 0.7314 | 0.8552 |
| 0.3198 | 9.5625 | 612 | 0.7216 | 0.3751 | 0.7216 | 0.8495 |
| 0.3198 | 9.5938 | 614 | 0.7171 | 0.3613 | 0.7171 | 0.8468 |
| 0.3198 | 9.625 | 616 | 0.7161 | 0.3613 | 0.7161 | 0.8462 |
| 0.3198 | 9.6562 | 618 | 0.7150 | 0.3613 | 0.7150 | 0.8456 |
| 0.3198 | 9.6875 | 620 | 0.7154 | 0.3613 | 0.7154 | 0.8458 |
| 0.3198 | 9.7188 | 622 | 0.7144 | 0.3613 | 0.7144 | 0.8452 |
| 0.3198 | 9.75 | 624 | 0.7121 | 0.3760 | 0.7121 | 0.8439 |
| 0.3198 | 9.7812 | 626 | 0.7120 | 0.3760 | 0.7120 | 0.8438 |
| 0.3198 | 9.8125 | 628 | 0.7138 | 0.3760 | 0.7138 | 0.8449 |
| 0.3198 | 9.8438 | 630 | 0.7159 | 0.3613 | 0.7159 | 0.8461 |
| 0.3198 | 9.875 | 632 | 0.7177 | 0.3613 | 0.7177 | 0.8472 |
| 0.3198 | 9.9062 | 634 | 0.7188 | 0.3647 | 0.7188 | 0.8478 |
| 0.3198 | 9.9375 | 636 | 0.7190 | 0.3647 | 0.7190 | 0.8480 |
| 0.3198 | 9.9688 | 638 | 0.7194 | 0.3647 | 0.7194 | 0.8482 |
| 0.3198 | 10.0 | 640 | 0.7196 | 0.3647 | 0.7196 | 0.8483 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mowen222/task-15-google-gemma-2-2b-it | mowen222 | 2024-11-26T14:54:48Z | 10 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2-2b-it",
"base_model:adapter:google/gemma-2-2b-it",
"region:us"
] | null | 2024-11-11T17:14:24Z | ---
base_model: google/gemma-2-2b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
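The card does not yet include a usage snippet. The following is a minimal, hypothetical sketch that assumes this repository holds a PEFT adapter (e.g. LoRA) trained on top of `google/gemma-2-2b-it`, as the card metadata indicates; the prompt is only a placeholder.

```python
# Hypothetical sketch: load the base model and attach this PEFT adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2-2b-it"                      # base model from the card metadata
adapter_id = "mowen222/task-15-google-gemma-2-2b-it"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # apply the adapter weights

inputs = tokenizer("Hello, how can I help you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```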
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Devishri1/test_bert1 | Devishri1 | 2024-11-26T14:46:57Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-11-26T14:46:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
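The card does not yet include a usage snippet. As a minimal sketch, assuming the repository contains a BERT model fine-tuned for extractive question answering (as the `question-answering` tag suggests), it can be called through the `transformers` pipeline; the question/context pair below is only an illustration.

```python
# Minimal sketch: extractive question answering with the transformers pipeline.
from transformers import pipeline

qa = pipeline("question-answering", model="Devishri1/test_bert1")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result)  # dict with 'score', 'start', 'end' and 'answer'
```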
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
deepnet/SN9-C2-Hk4-2 | deepnet | 2024-11-26T14:44:23Z | 222 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T14:41:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
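The card does not yet include a usage snippet. A minimal sketch follows, assuming the repository contains a standard Llama-style causal language model (as the `llama` / `text-generation` tags suggest); the prompt and generation settings are placeholders.

```python
# Minimal sketch: causal text generation with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepnet/SN9-C2-Hk4-2"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```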
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ThomET/final_fine-tuned_llama3-8b-241106 | ThomET | 2024-11-26T14:38:52Z | 131 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-26T14:15:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
furrutiav/roberta_mixtral_nllfg_rubric_mrpc_tf_idf_perplexity | furrutiav | 2024-11-26T14:37:15Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-11-25T16:12:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
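The card does not yet include a usage snippet. A minimal sketch follows, assuming the repository contains a RoBERTa encoder used for feature extraction (as the tags suggest); mean pooling over the last hidden state is one simple, assumed way to obtain a sentence-level embedding.

```python
# Minimal sketch: extract embeddings from the encoder's last hidden state.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "furrutiav/roberta_mixtral_nllfg_rubric_mrpc_tf_idf_perplexity"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

with torch.no_grad():
    inputs = tokenizer("A sentence to embed.", return_tensors="pt")
    hidden = model(**inputs).last_hidden_state   # shape: (batch, seq_len, hidden_size)
    embedding = hidden.mean(dim=1)               # simple mean pooling
print(embedding.shape)
```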
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
imagepipeline/Anime-Lightning | imagepipeline | 2024-11-26T14:37:10Z | 43 | 0 | diffusers | [
"diffusers",
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-11-26T14:35:13Z | ---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## Anime-Lightning
<img src="https://docs.imagepipeline.io/img/lightning-sdxl-image-pipeline.png" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This checkpoint model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details -
[](https://imagepipeline.io/models/Anime-Lightning?id=d301d01e-6e35-4dad-ad78-7a9ff1485491/)
## How to try this model?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php`, `javascript`, `node`, etc.? Check out our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sdxl/text2image/v1/run"
payload = json.dumps({
"model_id": "d301d01e-6e35-4dad-ad78-7a9ff1485491",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "",
"lora_weights": ""
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready-to-use `MODELS` like this for `SD 1.5` and `SDXL`:
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sdxl/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
furrutiav/roberta_mixtral_nllfg_rubric_mrpc_sentence_embd_perplexity | furrutiav | 2024-11-26T14:37:02Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-11-25T16:07:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CATIE-AQ/NERmembert2-4entities | CATIE-AQ | 2024-11-26T14:30:13Z | 120 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"token-classification",
"fr",
"dataset:CATIE-AQ/frenchNER_4entities",
"arxiv:1910.09700",
"arxiv:2411.08868",
"base_model:almanach/camembertv2-base",
"base_model:finetune:almanach/camembertv2-base",
"license:mit",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-21T14:57:06Z | ---
license: mit
base_model: almanach/camembertv2-base
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NERmembert2-4entities
results: []
datasets:
- CATIE-AQ/frenchNER_4entities
language:
- fr
widget:
- text: "Le dévoilement du logo officiel des JO s'est déroulé le 21 octobre 2019 au Grand Rex. Ce nouvel emblème et cette nouvelle typographie ont été conçus par le designer Sylvain Boyer avec les agences Royalties & Ecobranding. Rond, il rassemble trois symboles : une médaille d'or, la flamme olympique et Marianne, symbolisée par un visage de femme mais privée de son bonnet phrygien caractéristique. La typographie dessinée fait référence à l'Art déco, mouvement artistique des années 1920, décennie pendant laquelle ont eu lieu pour la dernière fois les Jeux olympiques à Paris en 1924. Pour la première fois, ce logo sera unique pour les Jeux olympiques et les Jeux paralympiques."
library_name: transformers
pipeline_tag: token-classification
co2_eq_emissions: 25.5
---
# NERmembert2-4entities
## Model Description
We present **NERmembert2-4entities**, a [CamemBERT v2 base](https://huggingface.co/almanach/camembertv2-base) model fine-tuned for French Named Entity Recognition on four French NER datasets covering 4 entity types (LOC, PER, ORG, MISC).
All these datasets were concatenated and cleaned into a single dataset that we called [frenchNER_4entities](https://huggingface.co/datasets/CATIE-AQ/frenchNER_4entities).
There are a total of **384,773** rows, of which **328,757** are for training, **24,131** for validation and **31,885** for testing.
Our methodology is described in a blog post available in [English](https://blog.vaniila.ai/en/NER_en/) or [French](https://blog.vaniila.ai/NER/).
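## Usage

The model can be used directly with the `transformers` token-classification pipeline. The snippet below is a minimal sketch (the example sentence is taken from the widget above); `aggregation_strategy="simple"` merges sub-word tokens into entity spans.

```python
# Minimal sketch: French NER with the token-classification pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="CATIE-AQ/NERmembert2-4entities",
    aggregation_strategy="simple",
)
print(ner("Le dévoilement du logo officiel des JO s'est déroulé le 21 octobre 2019 au Grand Rex."))
```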
## Evaluation results
The evaluation was carried out using the [**evaluate**](https://pypi.org/project/evaluate/) Python package.
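As an illustration (this is not the exact evaluation script), span-level precision, recall and F1 can be computed with the package's `seqeval` metric on IOB-tagged predictions:

```python
# Illustrative sketch: span-level metrics with evaluate's seqeval wrapper.
import evaluate

seqeval = evaluate.load("seqeval")
predictions = [["O", "B-PER", "I-PER", "O", "B-LOC"]]
references  = [["O", "B-PER", "I-PER", "O", "B-ORG"]]
print(seqeval.compute(predictions=predictions, references=references))
```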
### frenchNER_4entities
For space reasons, we show only the F1 scores of the different models. The full results can be found below the table.
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>Parameters</th>
<th><br>Context</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
<th><br>MISC</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td>
<td><br>110M</td>
<td><br>512 tokens</td>
<td><br>0.971</td>
<td><br>0.947</td>
<td><br>0.902</td>
<td><br>0.663</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td>
<td><br>67.5M</td>
<td><br>512 tokens</td>
<td><br>0.974</td>
<td><br>0.948</td>
<td><br>0.892</td>
<td><br>0.658</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td>
<td><br>110M</td>
<td><br>512 tokens</td>
<td><br>0.978</td>
<td><br>0.958</td>
<td><br>0.903</td>
<td><br>0.814</td>
</tr>
<tr>
<td rowspan="1"><br>NERmembert2-4entities (this model)</td>
<td><br>111M</td>
<td><br>1024 tokens</td>
<td><br>0.978</td>
<td><br>0.958</td>
<td><br>0.901</td>
<td><br>0.806</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmemberta-4entities">NERmemberta-4entities</a></td>
<td><br>111M</td>
<td><br>1024 tokens</td>
<td><br>0.979</td>
<td><br>0.961</td>
<td><br>0.915</td>
<td><br>0.812</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td>
<td><br>336M</td>
<td><br>512 tokens</td>
<td><br><b>0.982</b></td>
<td><br><b>0.964</b></td>
<td><br><b>0.919</b></td>
<td><br><b>0.834</b></td>
</tr>
</tbody>
</table>
<details>
<summary>Full results</summary>
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>Metrics</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
<th><br>MISC</th>
<th><br>O</th>
<th><br>Overall</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner (110M)</a></td>
<td><br>Precision</td>
<td><br>0.952</td>
<td><br>0.924</td>
<td><br>0.870</td>
<td><br>0.845</td>
<td><br>0.986</td>
<td><br>0.976</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.990</td>
<td><br>0.972</td>
<td><br>0.938</td>
<td><br>0.546</td>
<td><br>0.992</td>
<td><br>0.976</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.971</td>
<td><br>0.947</td>
<td><br>0.902</td>
<td><br>0.663</td>
<td><br>0.989</td>
<td><br>0.976</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner (67.5M)</a></td>
<td><br>Precision</td>
<td><br>0.962</td>
<td><br>0.933</td>
<td><br>0.857</td>
<td><br>0.830</td>
<td><br>0.985</td>
<td><br>0.976</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.987</td>
<td><br>0.963</td>
<td><br>0.930</td>
<td><br>0.545</td>
<td><br>0.993</td>
<td><br>0.976</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.974</td>
<td><br>0.948</td>
<td><br>0.892</td>
<td><br>0.658</td>
<td><br>0.989</td>
<td><br>0.976</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities (110M)</a></td>
<td><br>Precision</td>
<td><br>0.973</td>
<td><br>0.951</td>
<td><br>0.888</td>
<td><br>0.850</td>
<td><br>0.993</td>
<td><br>0.984</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.983</td>
<td><br>0.964</td>
<td><br>0.918</td>
<td><br>0.781</td>
<td><br>0.993</td>
<td><br>0.984</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.978</td>
<td><br>0.958</td>
<td><br>0.903</td>
<td><br>0.814</td>
<td><br>0.993</td>
<td><br>0.984</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert2-4entities">NERmembert2-4entities (111M) (this model)</a></td>
<td><br>Precision</td>
<td><br>0.973</td>
<td><br>0.951</td>
<td><br>0.882</td>
<td><br>0.860</td>
<td><br>0.991</td>
<td><br>0.982</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.982</td>
<td><br>0.965</td>
<td><br>0.921</td>
<td><br>0.759</td>
<td><br>0.994</td>
<td><br>0.982</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.978</td>
<td><br>0.958</td>
<td><br>0.901</td>
<td><br>0.806</td>
<td><br>0.992</td>
<td><br>0.982</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmemberta-4entities">NERmemberta-4entities (111M)</a></td>
<td><br>Precision</td>
<td><br>0.976</td>
<td><br>0.955</td>
<td><br>0.894</td>
<td><br>0.856</td>
<td><br>0.991</td>
<td><br>0.983</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.983</td>
<td><br>0.968</td>
<td><br>0.936</td>
<td><br>0.772</td>
<td><br>0.994</td>
<td><br>0.983</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.979</td>
<td><br>0.961</td>
<td><br>0.915</td>
<td><br>0.812</td>
<td><br>0.992</td>
<td><br>0.983</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities (336M)</a></td>
<td><br>Precision</td>
<td><br>0.977</td>
<td><br>0.961</td>
<td><br>0.896</td>
<td><br>0.872</td>
<td><br>0.993</td>
<td><br>0.986</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.987</td>
<td><br>0.966</td>
<td><br>0.943</td>
<td><br>0.798</td>
<td><br>0.995</td>
<td><br>0.986</td>
</tr>
<tr>
<td>F1</td>
<td><br><b>0.982</b></td>
<td><br><b>0.964</b></td>
<td><br><b>0.919</b></td>
<td><br><b>0.834</b></td>
<td><br><b>0.994</b></td>
<td><br><b>0.986</b></td>
</tr>
</tbody>
</table>
</details>
In detail:
### multiconer
For space reasons, we show only the F1 scores of the different models. You can see the full results below the table.
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
<th><br>MISC</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner (110M)</a></td>
<td><br>0.940</td>
<td><br>0.761</td>
<td><br>0.723</td>
<td><br>0.560</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner (67.5M)</a></td>
<td><br>0.921</td>
<td><br>0.748</td>
<td><br>0.694</td>
<td><br>0.530</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities (110M)</a></td>
<td><br>0.960</td>
<td><br>0.890</td>
<td><br>0.867</td>
<td><br>0.852</td>
</tr>
<tr>
<td rowspan="1"><br>NERmembert2-4entities (111M) (this model)</td>
<td><br>0.964</td>
<td><br>0.888</td>
<td><br>0.864</td>
<td><br>0.850</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmemberta-4entities">NERmemberta-4entities (111M)</a></td>
<td><br>0.966</td>
<td><br>0.891</td>
<td><br>0.867</td>
<td><br>0.862</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities (336M)</a></td>
<td><br><b>0.969</b></td>
<td><br><b>0.919</b></td>
<td><br><b>0.904</b></td>
<td><br><b>0.864</b></td>
</tr>
</tbody>
</table>
<details>
<summary>Full results</summary>
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>Metrics</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
<th><br>MISC</th>
<th><br>O</th>
<th><br>Overall</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner (110M)</a></td>
<td><br>Precision</td>
<td><br>0.908</td>
<td><br>0.717</td>
<td><br>0.753</td>
<td><br>0.620</td>
<td><br>0.936</td>
<td><br>0.889</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.975</td>
<td><br>0.811</td>
<td><br>0.696</td>
<td><br>0.511</td>
<td><br>0.938</td>
<td><br>0.889</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.940</td>
<td><br>0.761</td>
<td><br>0.723</td>
<td><br>0.560</td>
<td><br>0.937</td>
<td><br>0.889</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner (67.5M)</a></td>
<td><br>Precision</td>
<td><br>0.885</td>
<td><br>0.738</td>
<td><br>0.737</td>
<td><br>0.589</td>
<td><br>0.928</td>
<td><br>0.881</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.960</td>
<td><br>0.759</td>
<td><br>0.655</td>
<td><br>0.482</td>
<td><br>0.939</td>
<td><br>0.881</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.921</td>
<td><br>0.748</td>
<td><br>0.694</td>
<td><br>0.530</td>
<td><br>0.934</td>
<td><br>0.881</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities (110M)</a></td>
<td><br>Precision</td>
<td><br>0.954</td>
<td><br>0.893</td>
<td><br>0.851</td>
<td><br>0.849</td>
<td><br>0.979</td>
<td><br>0.954</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.967</td>
<td><br>0.887</td>
<td><br>0.883</td>
<td><br>0.855</td>
<td><br>0.974</td>
<td><br>0.954</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.960</td>
<td><br>0.890</td>
<td><br>0.867</td>
<td><br>0.852</td>
<td><br>0.977</td>
<td><br>0.954</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert2-4entities">NERmembert2-4entities (111M) (this model)</a></td>
<td><br>Precision</td>
<td><br>0.953</td>
<td><br>0.890</td>
<td><br>0.870</td>
<td><br>0.842</td>
<td><br>0.976</td>
<td><br>0.952</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.975</td>
<td><br>0.887</td>
<td><br>0.857</td>
<td><br>0.858</td>
<td><br>0.970</td>
<td><br>0.952</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.964</td>
<td><br>0.888</td>
<td><br>0.864</td>
<td><br>0.850</td>
<td><br>0.973</td>
<td><br>0.952</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmemberta-4entities">NERmemberta-4entities (111M)</a></td>
<td><br>Precision</td>
<td><br>0.961</td>
<td><br>0.895</td>
<td><br>0.859</td>
<td><br>0.845</td>
<td><br>0.978</td>
<td><br>0.953</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.972</td>
<td><br>0.886</td>
<td><br>0.876</td>
<td><br>0.879</td>
<td><br>0.970</td>
<td><br>0.953</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.966</td>
<td><br>0.891</td>
<td><br>0.867</td>
<td><br>0.862</td>
<td><br>0.974</td>
<td><br>0.953</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities (336M)</a></td>
<td><br>Precision</td>
<td><br>0.964</td>
<td><br>0.922</td>
<td><br>0.904</td>
<td><br>0.856</td>
<td><br>0.981</td>
<td><br>0.961</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.975</td>
<td><br>0.917</td>
<td><br>0.904</td>
<td><br>0.872</td>
<td><br>0.976</td>
<td><br>0.961</td>
</tr>
<tr>
<td>F1</td>
<td><br><b>0.969</b></td>
<td><br><b>0.919</b></td>
<td><br><b>0.904</b></td>
<td><br><b>0.864</b></td>
<td><br><b>0.978</b></td>
<td><br><b>0.961</b></td>
</tr>
</tbody>
</table>
</details>
### multinerd
For space reasons, we show only the F1 scores of the different models. You can see the full results below the table.
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
<th><br>MISC</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner (110M)</a></td>
<td><br>0.962</td>
<td><br>0.934</td>
<td><br>0.888</td>
<td><br>0.419</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner (67.5M)</a></td>
<td><br>0.972</td>
<td><br>0.938</td>
<td><br>0.884</td>
<td><br>0.430</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities (110M)</a></td>
<td><br>0.985</td>
<td><br>0.973</td>
<td><br>0.938</td>
<td><br>0.770</td>
</tr>
<tr>
<td rowspan="1"><br>NERmembert2-4entities (111M) (this model)</td>
<td><br>0.986</td>
<td><br>0.974</td>
<td><br>0.937</td>
<td><br>0.761</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmemberta-4entities">NERmemberta-4entities (111M)</a></td>
<td><br><b>0.987</b></td>
<td><br><b>0.976</b></td>
<td><br>0.942</td>
<td><br>0.770</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities (336M)</a></td>
<td><br><b>0.987</b></td>
<td><br>0.976</td>
<td><br>0.948</td>
<td><br><b>0.790</b></td>
</tr>
</tbody>
</table>
<details>
<summary>Full results</summary>
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>Metrics</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
<th><br>MISC</th>
<th><br>O</th>
<th><br>Overall</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner (110M)</a></td>
<td><br>Precision</td>
<td><br>0.931</td>
<td><br>0.893</td>
<td><br>0.827</td>
<td><br>0.725</td>
<td><br>0.979</td>
<td><br>0.966</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.994</td>
<td><br>0.980</td>
<td><br>0.959</td>
<td><br>0.295</td>
<td><br>0.990</td>
<td><br>0.966</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.962</td>
<td><br>0.934</td>
<td><br>0.888</td>
<td><br>0.419</td>
<td><br>0.984</td>
<td><br>0.966</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner (67.5M)</a></td>
<td><br>Precision</td>
<td><br>0.954</td>
<td><br>0.908</td>
<td><br>0.817</td>
<td><br>0.705</td>
<td><br>0.977</td>
<td><br>0.967</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.991</td>
<td><br>0.969</td>
<td><br>0.963</td>
<td><br>0.310</td>
<td><br>0.990</td>
<td><br>0.967</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.972</td>
<td><br>0.938</td>
<td><br>0.884</td>
<td><br>0.430</td>
<td><br>0.984</td>
<td><br>0.967</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities (110M)</a></td>
<td><br>Precision</td>
<td><br>0.976</td>
<td><br>0.961</td>
<td><br>0.911</td>
<td><br>0.829</td>
<td><br>0.991</td>
<td><br>0.983</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.994</td>
<td><br>0.985</td>
<td><br>0.967</td>
<td><br>0.719</td>
<td><br>0.993</td>
<td><br>0.983</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.985</td>
<td><br>0.973</td>
<td><br>0.938</td>
<td><br>0.770</td>
<td><br>0.992</td>
<td><br>0.983</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert2-4entities">NERmembert2-4entities (111M) (this model)</a></td>
<td><br>Precision</td>
<td><br>0.976</td>
<td><br>0.962</td>
<td><br>0.903</td>
<td><br>0.846</td>
<td><br>0.988</td>
<td><br>0.980</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.995</td>
<td><br>0.986</td>
<td><br>0.974</td>
<td><br>0.692</td>
<td><br>0.992</td>
<td><br>0.980</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.986</td>
<td><br>0.974</td>
<td><br>0.937</td>
<td><br>0.761</td>
<td><br>0.990</td>
<td><br>0.980</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmemberta-4entities">NERmemberta-4entities (111M)</a></td>
<td><br>Precision</td>
<td><br>0.979</td>
<td><br>0.963</td>
<td><br>0.912</td>
<td><br>0.848</td>
<td><br>0.988</td>
<td><br>0.981</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.996</td>
<td><br>0.989</td>
<td><br>0.975</td>
<td><br>0.705</td>
<td><br>0.992</td>
<td><br>0.981</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.987</td>
<td><br>0.976</td>
<td><br>0.942</td>
<td><br>0.770</td>
<td><br>0.990</td>
<td><br>0.981</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities (336M)</a></td>
<td><br>Precision</td>
<td><br>0.979</td>
<td><br>0.967</td>
<td><br>0.922</td>
<td><br>0.852</td>
<td><br>0.991</td>
<td><br>0.985</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.996</td>
<td><br>0.986</td>
<td><br>0.974</td>
<td><br>0.736</td>
<td><br>0.994</td>
<td><br>0.985</td>
</tr>
<tr>
<td>F1</td>
<td><br><b>0.987</b></td>
<td><br>0.976</td>
<td><br>0.948</td>
<td><br><b>0.790</b></td>
<td><br>0.993</td>
<td><br>0.985</td>
</tr>
</tbody>
</table>
</details>
### wikiner
For space reasons, we show only the F1 scores of the different models. You can see the full results below the table.
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
<th><br>MISC</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner (110M)</a></td>
<td><br><b>0.986</b></td>
<td><br><b>0.966</b></td>
<td><br><b>0.938</b></td>
<td><br><b>0.938</b></td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner (67.5M)</a></td>
<td><br>0.983</td>
<td><br>0.964</td>
<td><br>0.925</td>
<td><br>0.926</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities (110M)</a></td>
<td><br>0.970</td>
<td><br>0.945</td>
<td><br>0.876</td>
<td><br>0.872</td>
</tr>
<tr>
<td rowspan="1"><br>NERmembert2-4entities (111M) (this model)</td>
<td><br>0.968</td>
<td><br>0.945</td>
<td><br>0.874</td>
<td><br>0.871</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmemberta-4entities">NERmemberta-4entities (111M)</a></td>
<td><br>0.969</td>
<td><br>0.950</td>
<td><br>0.897</td>
<td><br>0.871</td>
</tr>
<tr>
<td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities (336M)</a></td>
<td><br>0.975</td>
<td><br>0.953</td>
<td><br>0.896</td>
<td><br>0.893</td>
</tr>
</tbody>
</table>
<details>
<summary>Full results</summary>
<table>
<thead>
<tr>
<th><br>Model</th>
<th><br>Metrics</th>
<th><br>PER</th>
<th><br>LOC</th>
<th><br>ORG</th>
<th><br>MISC</th>
<th><br>O</th>
<th><br>Overall</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner (110M)</a></td>
<td><br>Precision</td>
<td><br>0.986</td>
<td><br>0.962</td>
<td><br>0.925</td>
<td><br>0.943</td>
<td><br>0.998</td>
<td><br>0.992</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.987</td>
<td><br>0.969</td>
<td><br>0.951</td>
<td><br>0.933</td>
<td><br>0.997</td>
<td><br>0.992</td>
</tr>
<tr>
<td>F1</td>
<td><br><b>0.986</b></td>
<td><br><b>0.966</b></td>
<td><br><b>0.938</b></td>
<td><br><b>0.938</b></td>
<td><br><b>0.998</b></td>
<td><br><b>0.992</b></td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner (67.5M)</a></td>
<td><br>Precision</td>
<td><br>0.982</td>
<td><br>0.964</td>
<td><br>0.910</td>
<td><br>0.942</td>
<td><br>0.997</td>
<td><br>0.991</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.985</td>
<td><br>0.963</td>
<td><br>0.940</td>
<td><br>0.910</td>
<td><br>0.998</td>
<td><br>0.991</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.983</td>
<td><br>0.964</td>
<td><br>0.925</td>
<td><br>0.926</td>
<td><br>0.997</td>
<td><br>0.991</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities (110M)</a></td>
<td><br>Precision</td>
<td><br>0.970</td>
<td><br>0.944</td>
<td><br>0.872</td>
<td><br>0.878</td>
<td><br>0.996</td>
<td><br>0.986</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.969</td>
<td><br>0.947</td>
<td><br>0.880</td>
<td><br>0.866</td>
<td><br>0.996</td>
<td><br>0.986</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.970</td>
<td><br>0.945</td>
<td><br>0.876</td>
<td><br>0.872</td>
<td><br>0.996</td>
<td><br>0.986</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert2-4entities">NERmembert2-4entities (111M) (this model)</a></td>
<td><br>Precision</td>
<td><br>0.970</td>
<td><br>0.942</td>
<td><br>0.865</td>
<td><br>0.883</td>
<td><br>0.996</td>
<td><br>0.985</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.966</td>
<td><br>0.948</td>
<td><br>0.883</td>
<td><br>0.859</td>
<td><br>0.996</td>
<td><br>0.985</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.968</td>
<td><br>0.945</td>
<td><br>0.874</td>
<td><br>0.871</td>
<td><br>0.996</td>
<td><br>0.985</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmemberta-4entities">NERmemberta-4entities (111M)</a></td>
<td><br>Precision</td>
<td><br>0.974</td>
<td><br>0.949</td>
<td><br>0.883</td>
<td><br>0.869</td>
<td><br>0.996</td>
<td><br>0.986</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.965</td>
<td><br>0.951</td>
<td><br>0.910</td>
<td><br>0.872</td>
<td><br>0.996</td>
<td><br>0.986</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.969</td>
<td><br>0.950</td>
<td><br>0.897</td>
<td><br>0.871</td>
<td><br>0.996</td>
<td><br>0.986</td>
</tr>
<tr>
<td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities (336M)</a></td>
<td><br>Precision</td>
<td><br>0.975</td>
<td><br>0.957</td>
<td><br>0.872</td>
<td><br>0.901</td>
<td><br>0.997</td>
<td><br>0.989</td>
</tr>
<tr>
<td><br>Recall</td>
<td><br>0.975</td>
<td><br>0.949</td>
<td><br>0.922</td>
<td><br>0.884</td>
<td><br>0.997</td>
<td><br>0.989</td>
</tr>
<tr>
<td>F1</td>
<td><br>0.975</td>
<td><br>0.953</td>
<td><br>0.896</td>
<td><br>0.893</td>
<td><br>0.997</td>
<td><br>0.989</td>
</tr>
</tbody>
</table>
</details>
## Usage
### Code
```python
from transformers import pipeline
ner = pipeline(
    'token-classification',
    model='CATIE-AQ/NERmembert2-4entities',
    tokenizer='CATIE-AQ/NERmembert2-4entities',
    aggregation_strategy="simple"
)
result = ner(
"Le dévoilement du logo officiel des JO s'est déroulé le 21 octobre 2019 au Grand Rex. Ce nouvel emblème et cette nouvelle typographie ont été conçus par le designer Sylvain Boyer avec les agences Royalties & Ecobranding. Rond, il rassemble trois symboles : une médaille d'or, la flamme olympique et Marianne, symbolisée par un visage de femme mais privée de son bonnet phrygien caractéristique. La typographie dessinée fait référence à l'Art déco, mouvement artistique des années 1920, décennie pendant laquelle ont eu lieu pour la dernière fois les Jeux olympiques à Paris en 1924. Pour la première fois, ce logo sera unique pour les Jeux olympiques et les Jeux paralympiques."
)
print(result)
```
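The pipeline returns one dictionary per aggregated entity. Continuing from the snippet above, a minimal post-processing sketch (the field names are the standard `transformers` aggregation output, not specific to this model) groups the detected spans by entity type:
```python
from collections import defaultdict

# Group detected spans by entity type (PER, LOC, ORG, MISC).
by_label = defaultdict(list)
for entity in result:
    # Each item carries "entity_group", "word", "score", "start" and "end".
    by_label[entity["entity_group"]].append((entity["word"], round(float(entity["score"]), 3)))

for label, spans in by_label.items():
    print(label, spans)
```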
### Try it through Space
A Space has been created to test the model. It is available [here](https://huggingface.co/spaces/CATIE-AQ/NERmembert).
## Environmental Impact
*Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
- **Hardware Type:** A100 PCIe 40/80GB
- **Hours used:** 1h51min
- **Cloud Provider:** Private Infrastructure
- **Carbon Efficiency (kg/kWh):** 0.055 (estimated from [electricitymaps](https://app.electricitymaps.com/zone/FR) for November 21, 2024).
- **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 0.0255 kg eq. CO2
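As a rough sanity check of the figure above (the average power draw below is an assumption, not a measured value), the stated formula gives approximately the reported emissions:
```python
# Power consumption x Time x Carbon intensity of the power grid (all values approximate).
hours = 1.85              # ~1h51min of training
avg_power_kw = 0.250      # assumed average A100 draw, in kW
carbon_intensity = 0.055  # kg CO2eq per kWh (electricitymaps, FR, 2024-11-21)

print(round(hours * avg_power_kw * carbon_intensity, 4))  # ~0.0254 kg eq. CO2
```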
## Citations
### NERmemBERT2-4entities
```
@misc {NERmemberta2024,
author = { {BOURDOIS, Loïck} },
organization = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { NERmemberta-4entities },
year = 2024,
url = { https://huggingface.co/CATIE-AQ/NERmemberta-4entities },
doi = { 10.57967/hf/3640 },
publisher = { Hugging Face }
}
```
### NERmemBERT
```
@misc {NERmembert2024,
author = { {BOURDOIS, Loïck} },
organization = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { NERmembert-base-3entities },
year = 2024,
url = { https://huggingface.co/CATIE-AQ/NERmembert-base-4entities },
doi = { 10.57967/hf/1752 },
publisher = { Hugging Face }
}
```
### CamemBERT
```
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}}
```
### CamemBERT 2.0
```
@misc{antoun2024camembert20smarterfrench,
title={CamemBERT 2.0: A Smarter French Language Model Aged to Perfection},
author={Wissam Antoun and Francis Kulumba and Rian Touchent and Éric de la Clergerie and Benoît Sagot and Djamé Seddah},
year={2024},
eprint={2411.08868},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2411.08868},
}
```
### multiconer
```
@inproceedings{multiconer2-report,
title={{SemEval-2023 Task 2: Fine-grained Multilingual Named Entity Recognition (MultiCoNER 2)}},
author={Fetahu, Besnik and Kar, Sudipta and Chen, Zhiyu and Rokhlenko, Oleg and Malmasi, Shervin},
booktitle={Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)},
year={2023},
publisher={Association for Computational Linguistics}}
@article{multiconer2-data,
title={{MultiCoNER v2: a Large Multilingual dataset for Fine-grained and Noisy Named Entity Recognition}},
author={Fetahu, Besnik and Chen, Zhiyu and Kar, Sudipta and Rokhlenko, Oleg and Malmasi, Shervin},
year={2023}}
```
### multinerd
```
@inproceedings{tedeschi-navigli-2022-multinerd,
title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)",
author = "Tedeschi, Simone and Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-naacl.60",
doi = "10.18653/v1/2022.findings-naacl.60",
pages = "801--812"}
```
### pii-masking-200k
```
@misc {ai4privacy_2023,
author = { {ai4Privacy} },
title = { pii-masking-200k (Revision 1d4c0a1) },
year = 2023,
url = { https://huggingface.co/datasets/ai4privacy/pii-masking-200k },
doi = { 10.57967/hf/1532 },
publisher = { Hugging Face }}
```
### wikiner
```
@article{NOTHMAN2013151,
title = {Learning multilingual named entity recognition from Wikipedia},
journal = {Artificial Intelligence},
volume = {194},
pages = {151-175},
year = {2013},
note = {Artificial Intelligence, Wikipedia and Semi-Structured Resources},
issn = {0004-3702},
doi = {https://doi.org/10.1016/j.artint.2012.03.006},
url = {https://www.sciencedirect.com/science/article/pii/S0004370212000276},
author = {Joel Nothman and Nicky Ringland and Will Radford and Tara Murphy and James R. Curran}}
```
### frenchNER_4entities
```
@misc {frenchNER2024,
author = { {BOURDOIS, Loïck} },
organization = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { frenchNER_4entities },
year = 2024,
url = { https://huggingface.co/CATIE-AQ/frenchNER_4entities },
doi = { 10.57967/hf/1751 },
publisher = { Hugging Face }
}
```
## License
MIT |
quveex/flan2 | quveex | 2024-11-26T14:28:41Z | 6 | 0 | null | [
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dataset:aqua_rat",
"dataset:esnli",
"dataset:quasc",
"dataset:qed",
"arxiv:2210.11416",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | text2text-generation | 2024-11-26T14:27:52Z | ---
language:
- en
- fr
- ro
- de
- multilingual
tags:
- text2text-generation
widget:
- text: "Translate to German: My name is Arthur"
example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
example_title: "Premise and hypothesis"
datasets:
- svakulenk0/qrecc
- taskmaster2
- djaym7/wiki_dialog
- deepmind/code_contests
- lambada
- gsm8k
- aqua_rat
- esnli
- quasc
- qed
license: apache-2.0
---
# Model Card for FLAN-T5 small
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/flan2_architecture.jpg"
alt="drawing" width="600"/>
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
# TL;DR
If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks covering more languages as well.
As mentioned in the first few lines of the abstract:
> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy-pasted from the [T5 model card](https://huggingface.co/t5-large).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
- **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2210.11416.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face FLAN-T5 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/t5)
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small", device_map="auto")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small", device_map="auto", torch_dtype=torch.float16)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small", device_map="auto", load_in_8bit=True)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
# Uses
## Direct Use and Downstream Use
The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:
> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models
See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
The information in this section is copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf):
> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
## Ethical considerations and risks
> Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
## Known Limitations
> Flan-T5 has not been tested in real world applications.
## Sensitive Use:
> Flan-T5 should not be applied for any unacceptable use cases, e.g., generation of abusive speech.
# Training Details
## Training Data
The model was trained on a mixture of tasks that includes those described in the table below (from the original paper, Figure 2):

## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):
> These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size.
The model has been trained on TPU v3 or TPU v4 pods, using the [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on various tasks (1,836 in total) covering several languages. See the table below for some quantitative evaluation:

For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf).
## Results
For full results for FLAN-T5-Small, see the [research paper](https://arxiv.org/pdf/2210.11416.pdf), Table 3.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.11416,
doi = {10.48550/ARXIV.2210.11416},
url = {https://arxiv.org/abs/2210.11416},
author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Scaling Instruction-Finetuned Language Models},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` |
goldandrabbit/test_trainer | goldandrabbit | 2024-11-26T14:25:57Z | 167 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T14:25:13Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0277
- Accuracy: 0.592
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
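The settings listed above map roughly to the following `TrainingArguments` sketch (illustrative only; dataset preparation, the `compute_metrics` function, and the actual training script are not part of this card):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="test_trainer",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",         # AdamW with betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```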
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.0843 | 0.544 |
| No log | 2.0 | 250 | 1.0067 | 0.578 |
| No log | 3.0 | 375 | 1.0277 | 0.592 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Devishri1/test_bert | Devishri1 | 2024-11-26T14:25:38Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-11-26T14:25:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k10_task2_organization_fold1 | MayBashendy | 2024-11-26T14:24:17Z | 164 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-26T14:18:31Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits_FineTuningAraBERT_AugV5_k10_task2_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits_FineTuningAraBERT_AugV5_k10_task2_organization_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7550
- Qwk: 0.5459
- Mse: 0.7550
- Rmse: 0.8689
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0541 | 2 | 5.6929 | 0.0168 | 5.6929 | 2.3860 |
| No log | 0.1081 | 4 | 3.7957 | -0.0504 | 3.7957 | 1.9482 |
| No log | 0.1622 | 6 | 2.4211 | -0.0634 | 2.4211 | 1.5560 |
| No log | 0.2162 | 8 | 1.2069 | 0.1229 | 1.2069 | 1.0986 |
| No log | 0.2703 | 10 | 0.7455 | 0.3504 | 0.7455 | 0.8634 |
| No log | 0.3243 | 12 | 0.6705 | 0.3964 | 0.6705 | 0.8189 |
| No log | 0.3784 | 14 | 0.6063 | 0.3985 | 0.6063 | 0.7787 |
| No log | 0.4324 | 16 | 1.2184 | 0.2456 | 1.2184 | 1.1038 |
| No log | 0.4865 | 18 | 2.7432 | 0.0645 | 2.7432 | 1.6563 |
| No log | 0.5405 | 20 | 2.7317 | 0.0292 | 2.7317 | 1.6528 |
| No log | 0.5946 | 22 | 1.6697 | 0.1299 | 1.6697 | 1.2922 |
| No log | 0.6486 | 24 | 1.1355 | 0.2109 | 1.1355 | 1.0656 |
| No log | 0.7027 | 26 | 0.9269 | 0.0466 | 0.9269 | 0.9627 |
| No log | 0.7568 | 28 | 0.6940 | 0.0878 | 0.6940 | 0.8331 |
| No log | 0.8108 | 30 | 0.5686 | 0.2062 | 0.5686 | 0.7540 |
| No log | 0.8649 | 32 | 0.6367 | 0.0989 | 0.6367 | 0.7979 |
| No log | 0.9189 | 34 | 0.7941 | 0.0480 | 0.7941 | 0.8911 |
| No log | 0.9730 | 36 | 1.2396 | 0.0849 | 1.2396 | 1.1134 |
| No log | 1.0270 | 38 | 1.8189 | -0.1222 | 1.8189 | 1.3487 |
| No log | 1.0811 | 40 | 1.9419 | -0.0428 | 1.9419 | 1.3935 |
| No log | 1.1351 | 42 | 1.8103 | 0.0193 | 1.8103 | 1.3455 |
| No log | 1.1892 | 44 | 1.5323 | 0.1906 | 1.5323 | 1.2379 |
| No log | 1.2432 | 46 | 1.2593 | 0.1742 | 1.2593 | 1.1222 |
| No log | 1.2973 | 48 | 1.0140 | 0.1961 | 1.0140 | 1.0070 |
| No log | 1.3514 | 50 | 0.9861 | 0.1439 | 0.9861 | 0.9930 |
| No log | 1.4054 | 52 | 0.9284 | 0.1354 | 0.9284 | 0.9636 |
| No log | 1.4595 | 54 | 0.7346 | 0.1160 | 0.7346 | 0.8571 |
| No log | 1.5135 | 56 | 0.6562 | 0.1234 | 0.6562 | 0.8101 |
| No log | 1.5676 | 58 | 0.7188 | 0.1273 | 0.7188 | 0.8478 |
| No log | 1.6216 | 60 | 0.9836 | 0.1598 | 0.9836 | 0.9918 |
| No log | 1.6757 | 62 | 1.2634 | 0.1529 | 1.2634 | 1.1240 |
| No log | 1.7297 | 64 | 1.1770 | 0.2106 | 1.1770 | 1.0849 |
| No log | 1.7838 | 66 | 1.0210 | 0.1480 | 1.0210 | 1.0105 |
| No log | 1.8378 | 68 | 0.8964 | 0.1273 | 0.8964 | 0.9468 |
| No log | 1.8919 | 70 | 0.8348 | 0.0813 | 0.8348 | 0.9137 |
| No log | 1.9459 | 72 | 0.8526 | 0.1273 | 0.8526 | 0.9233 |
| No log | 2.0 | 74 | 0.9668 | 0.1273 | 0.9668 | 0.9832 |
| No log | 2.0541 | 76 | 1.1918 | 0.1409 | 1.1918 | 1.0917 |
| No log | 2.1081 | 78 | 1.2674 | 0.1251 | 1.2674 | 1.1258 |
| No log | 2.1622 | 80 | 1.2560 | 0.1386 | 1.2560 | 1.1207 |
| No log | 2.2162 | 82 | 1.2286 | 0.1249 | 1.2286 | 1.1084 |
| No log | 2.2703 | 84 | 1.1476 | 0.1529 | 1.1476 | 1.0712 |
| No log | 2.3243 | 86 | 1.0467 | 0.2024 | 1.0467 | 1.0231 |
| No log | 2.3784 | 88 | 0.9094 | 0.3233 | 0.9094 | 0.9536 |
| No log | 2.4324 | 90 | 0.8638 | 0.3036 | 0.8638 | 0.9294 |
| No log | 2.4865 | 92 | 0.8111 | 0.4016 | 0.8111 | 0.9006 |
| No log | 2.5405 | 94 | 0.7352 | 0.4171 | 0.7352 | 0.8575 |
| No log | 2.5946 | 96 | 0.7403 | 0.4171 | 0.7403 | 0.8604 |
| No log | 2.6486 | 98 | 0.7732 | 0.4213 | 0.7732 | 0.8793 |
| No log | 2.7027 | 100 | 0.8820 | 0.3990 | 0.8820 | 0.9391 |
| No log | 2.7568 | 102 | 1.1033 | 0.2820 | 1.1033 | 1.0504 |
| No log | 2.8108 | 104 | 1.0757 | 0.2723 | 1.0757 | 1.0372 |
| No log | 2.8649 | 106 | 0.7935 | 0.3854 | 0.7935 | 0.8908 |
| No log | 2.9189 | 108 | 0.6562 | 0.1752 | 0.6562 | 0.8101 |
| No log | 2.9730 | 110 | 0.6886 | 0.0268 | 0.6886 | 0.8298 |
| No log | 3.0270 | 112 | 0.6660 | 0.0463 | 0.6660 | 0.8161 |
| No log | 3.0811 | 114 | 0.6120 | 0.3165 | 0.6120 | 0.7823 |
| No log | 3.1351 | 116 | 0.6552 | 0.3955 | 0.6552 | 0.8095 |
| No log | 3.1892 | 118 | 0.8944 | 0.4193 | 0.8944 | 0.9458 |
| No log | 3.2432 | 120 | 0.9683 | 0.3371 | 0.9683 | 0.9840 |
| No log | 3.2973 | 122 | 0.8323 | 0.4158 | 0.8323 | 0.9123 |
| No log | 3.3514 | 124 | 0.6300 | 0.3615 | 0.6300 | 0.7937 |
| No log | 3.4054 | 126 | 0.5232 | 0.3898 | 0.5232 | 0.7233 |
| No log | 3.4595 | 128 | 0.5052 | 0.4048 | 0.5052 | 0.7108 |
| No log | 3.5135 | 130 | 0.5179 | 0.3640 | 0.5179 | 0.7196 |
| No log | 3.5676 | 132 | 0.5660 | 0.2901 | 0.5660 | 0.7523 |
| No log | 3.6216 | 134 | 0.6349 | 0.3424 | 0.6349 | 0.7968 |
| No log | 3.6757 | 136 | 0.7058 | 0.2846 | 0.7058 | 0.8401 |
| No log | 3.7297 | 138 | 0.7156 | 0.2846 | 0.7156 | 0.8459 |
| No log | 3.7838 | 140 | 0.6752 | 0.3024 | 0.6752 | 0.8217 |
| No log | 3.8378 | 142 | 0.6498 | 0.3556 | 0.6498 | 0.8061 |
| No log | 3.8919 | 144 | 0.5990 | 0.3874 | 0.5990 | 0.7739 |
| No log | 3.9459 | 146 | 0.5524 | 0.3409 | 0.5524 | 0.7432 |
| No log | 4.0 | 148 | 0.5109 | 0.3433 | 0.5109 | 0.7147 |
| No log | 4.0541 | 150 | 0.5000 | 0.4007 | 0.5000 | 0.7071 |
| No log | 4.1081 | 152 | 0.4991 | 0.4007 | 0.4991 | 0.7065 |
| No log | 4.1622 | 154 | 0.5548 | 0.4646 | 0.5548 | 0.7448 |
| No log | 4.2162 | 156 | 0.6463 | 0.5051 | 0.6463 | 0.8039 |
| No log | 4.2703 | 158 | 0.6549 | 0.5164 | 0.6549 | 0.8093 |
| No log | 4.3243 | 160 | 0.5922 | 0.4650 | 0.5922 | 0.7695 |
| No log | 4.3784 | 162 | 0.5201 | 0.4561 | 0.5201 | 0.7211 |
| No log | 4.4324 | 164 | 0.5091 | 0.5645 | 0.5091 | 0.7135 |
| No log | 4.4865 | 166 | 0.5208 | 0.5633 | 0.5208 | 0.7217 |
| No log | 4.5405 | 168 | 0.5425 | 0.5296 | 0.5425 | 0.7366 |
| No log | 4.5946 | 170 | 0.5928 | 0.5098 | 0.5928 | 0.7699 |
| No log | 4.6486 | 172 | 0.6982 | 0.4571 | 0.6982 | 0.8356 |
| No log | 4.7027 | 174 | 0.8857 | 0.4358 | 0.8857 | 0.9411 |
| No log | 4.7568 | 176 | 1.0183 | 0.3789 | 1.0183 | 1.0091 |
| No log | 4.8108 | 178 | 1.0296 | 0.3789 | 1.0296 | 1.0147 |
| No log | 4.8649 | 180 | 0.8922 | 0.4258 | 0.8922 | 0.9446 |
| No log | 4.9189 | 182 | 0.7415 | 0.4340 | 0.7415 | 0.8611 |
| No log | 4.9730 | 184 | 0.6244 | 0.4828 | 0.6244 | 0.7902 |
| No log | 5.0270 | 186 | 0.5802 | 0.5005 | 0.5802 | 0.7617 |
| No log | 5.0811 | 188 | 0.5859 | 0.5005 | 0.5859 | 0.7654 |
| No log | 5.1351 | 190 | 0.6190 | 0.4830 | 0.6190 | 0.7868 |
| No log | 5.1892 | 192 | 0.6819 | 0.4928 | 0.6819 | 0.8258 |
| No log | 5.2432 | 194 | 0.7083 | 0.4796 | 0.7083 | 0.8416 |
| No log | 5.2973 | 196 | 0.6724 | 0.5163 | 0.6724 | 0.8200 |
| No log | 5.3514 | 198 | 0.6181 | 0.4584 | 0.6181 | 0.7862 |
| No log | 5.4054 | 200 | 0.6118 | 0.5422 | 0.6118 | 0.7822 |
| No log | 5.4595 | 202 | 0.6340 | 0.5300 | 0.6340 | 0.7962 |
| No log | 5.5135 | 204 | 0.6590 | 0.5297 | 0.6590 | 0.8118 |
| No log | 5.5676 | 206 | 0.6986 | 0.5488 | 0.6986 | 0.8358 |
| No log | 5.6216 | 208 | 0.7406 | 0.5459 | 0.7406 | 0.8606 |
| No log | 5.6757 | 210 | 0.7644 | 0.5347 | 0.7644 | 0.8743 |
| No log | 5.7297 | 212 | 0.7944 | 0.5133 | 0.7944 | 0.8913 |
| No log | 5.7838 | 214 | 0.8237 | 0.4808 | 0.8237 | 0.9076 |
| No log | 5.8378 | 216 | 0.8410 | 0.4032 | 0.8410 | 0.9171 |
| No log | 5.8919 | 218 | 0.8483 | 0.4638 | 0.8483 | 0.9210 |
| No log | 5.9459 | 220 | 0.8522 | 0.4645 | 0.8522 | 0.9231 |
| No log | 6.0 | 222 | 0.8327 | 0.5335 | 0.8327 | 0.9125 |
| No log | 6.0541 | 224 | 0.8150 | 0.5170 | 0.8150 | 0.9028 |
| No log | 6.1081 | 226 | 0.7994 | 0.5085 | 0.7994 | 0.8941 |
| No log | 6.1622 | 228 | 0.7608 | 0.5175 | 0.7608 | 0.8723 |
| No log | 6.2162 | 230 | 0.7277 | 0.4964 | 0.7277 | 0.8530 |
| No log | 6.2703 | 232 | 0.7029 | 0.5063 | 0.7029 | 0.8384 |
| No log | 6.3243 | 234 | 0.6840 | 0.5400 | 0.6840 | 0.8270 |
| No log | 6.3784 | 236 | 0.6801 | 0.4979 | 0.6801 | 0.8247 |
| No log | 6.4324 | 238 | 0.6943 | 0.4980 | 0.6943 | 0.8332 |
| No log | 6.4865 | 240 | 0.7060 | 0.4980 | 0.7060 | 0.8402 |
| No log | 6.5405 | 242 | 0.7361 | 0.4959 | 0.7361 | 0.8579 |
| No log | 6.5946 | 244 | 0.7468 | 0.4873 | 0.7468 | 0.8642 |
| No log | 6.6486 | 246 | 0.7588 | 0.5081 | 0.7588 | 0.8711 |
| No log | 6.7027 | 248 | 0.7599 | 0.5081 | 0.7599 | 0.8717 |
| No log | 6.7568 | 250 | 0.7519 | 0.5252 | 0.7519 | 0.8671 |
| No log | 6.8108 | 252 | 0.7436 | 0.5034 | 0.7436 | 0.8623 |
| No log | 6.8649 | 254 | 0.7459 | 0.5113 | 0.7459 | 0.8637 |
| No log | 6.9189 | 256 | 0.7410 | 0.5323 | 0.7410 | 0.8608 |
| No log | 6.9730 | 258 | 0.7277 | 0.5113 | 0.7277 | 0.8530 |
| No log | 7.0270 | 260 | 0.7237 | 0.5030 | 0.7237 | 0.8507 |
| No log | 7.0811 | 262 | 0.7356 | 0.5061 | 0.7356 | 0.8577 |
| No log | 7.1351 | 264 | 0.7403 | 0.5067 | 0.7403 | 0.8604 |
| No log | 7.1892 | 266 | 0.7351 | 0.5061 | 0.7351 | 0.8574 |
| No log | 7.2432 | 268 | 0.7327 | 0.5061 | 0.7327 | 0.8560 |
| No log | 7.2973 | 270 | 0.7279 | 0.5155 | 0.7279 | 0.8532 |
| No log | 7.3514 | 272 | 0.7205 | 0.5163 | 0.7205 | 0.8488 |
| No log | 7.4054 | 274 | 0.7113 | 0.5478 | 0.7113 | 0.8434 |
| No log | 7.4595 | 276 | 0.7059 | 0.5721 | 0.7059 | 0.8402 |
| No log | 7.5135 | 278 | 0.7050 | 0.5206 | 0.7050 | 0.8396 |
| No log | 7.5676 | 280 | 0.6901 | 0.4996 | 0.6901 | 0.8307 |
| No log | 7.6216 | 282 | 0.6742 | 0.5407 | 0.6742 | 0.8211 |
| No log | 7.6757 | 284 | 0.6730 | 0.5400 | 0.6730 | 0.8204 |
| No log | 7.7297 | 286 | 0.6754 | 0.5055 | 0.6754 | 0.8218 |
| No log | 7.7838 | 288 | 0.6846 | 0.5376 | 0.6846 | 0.8274 |
| No log | 7.8378 | 290 | 0.6972 | 0.5376 | 0.6972 | 0.8350 |
| No log | 7.8919 | 292 | 0.7148 | 0.5366 | 0.7148 | 0.8455 |
| No log | 7.9459 | 294 | 0.7280 | 0.5351 | 0.7280 | 0.8532 |
| No log | 8.0 | 296 | 0.7420 | 0.5243 | 0.7420 | 0.8614 |
| No log | 8.0541 | 298 | 0.7519 | 0.5233 | 0.7519 | 0.8671 |
| No log | 8.1081 | 300 | 0.7572 | 0.5141 | 0.7572 | 0.8702 |
| No log | 8.1622 | 302 | 0.7571 | 0.5450 | 0.7571 | 0.8701 |
| No log | 8.2162 | 304 | 0.7541 | 0.5450 | 0.7541 | 0.8684 |
| No log | 8.2703 | 306 | 0.7484 | 0.5459 | 0.7484 | 0.8651 |
| No log | 8.3243 | 308 | 0.7468 | 0.5459 | 0.7468 | 0.8642 |
| No log | 8.3784 | 310 | 0.7590 | 0.5459 | 0.7590 | 0.8712 |
| No log | 8.4324 | 312 | 0.7698 | 0.5437 | 0.7698 | 0.8774 |
| No log | 8.4865 | 314 | 0.7794 | 0.5124 | 0.7794 | 0.8828 |
| No log | 8.5405 | 316 | 0.7936 | 0.4709 | 0.7936 | 0.8908 |
| No log | 8.5946 | 318 | 0.8003 | 0.4699 | 0.8003 | 0.8946 |
| No log | 8.6486 | 320 | 0.8010 | 0.4699 | 0.8010 | 0.8950 |
| No log | 8.7027 | 322 | 0.7944 | 0.4804 | 0.7944 | 0.8913 |
| No log | 8.7568 | 324 | 0.7895 | 0.4936 | 0.7895 | 0.8885 |
| No log | 8.8108 | 326 | 0.7852 | 0.5233 | 0.7852 | 0.8861 |
| No log | 8.8649 | 328 | 0.7817 | 0.5158 | 0.7817 | 0.8842 |
| No log | 8.9189 | 330 | 0.7839 | 0.4977 | 0.7839 | 0.8854 |
| No log | 8.9730 | 332 | 0.7884 | 0.5178 | 0.7884 | 0.8879 |
| No log | 9.0270 | 334 | 0.7910 | 0.5281 | 0.7910 | 0.8894 |
| No log | 9.0811 | 336 | 0.7891 | 0.5178 | 0.7891 | 0.8883 |
| No log | 9.1351 | 338 | 0.7883 | 0.5178 | 0.7883 | 0.8879 |
| No log | 9.1892 | 340 | 0.7871 | 0.4977 | 0.7871 | 0.8872 |
| No log | 9.2432 | 342 | 0.7844 | 0.5068 | 0.7844 | 0.8856 |
| No log | 9.2973 | 344 | 0.7804 | 0.5068 | 0.7804 | 0.8834 |
| No log | 9.3514 | 346 | 0.7744 | 0.5459 | 0.7744 | 0.8800 |
| No log | 9.4054 | 348 | 0.7703 | 0.5464 | 0.7703 | 0.8777 |
| No log | 9.4595 | 350 | 0.7672 | 0.5464 | 0.7672 | 0.8759 |
| No log | 9.5135 | 352 | 0.7637 | 0.5464 | 0.7637 | 0.8739 |
| No log | 9.5676 | 354 | 0.7620 | 0.5464 | 0.7620 | 0.8729 |
| No log | 9.6216 | 356 | 0.7591 | 0.5464 | 0.7591 | 0.8713 |
| No log | 9.6757 | 358 | 0.7575 | 0.5464 | 0.7575 | 0.8704 |
| No log | 9.7297 | 360 | 0.7563 | 0.5464 | 0.7563 | 0.8697 |
| No log | 9.7838 | 362 | 0.7557 | 0.5464 | 0.7557 | 0.8693 |
| No log | 9.8378 | 364 | 0.7551 | 0.5464 | 0.7551 | 0.8690 |
| No log | 9.8919 | 366 | 0.7547 | 0.5459 | 0.7547 | 0.8688 |
| No log | 9.9459 | 368 | 0.7549 | 0.5459 | 0.7549 | 0.8689 |
| No log | 10.0 | 370 | 0.7550 | 0.5459 | 0.7550 | 0.8689 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
breezedeus/cnocr-ppocr-ch_PP-OCRv4_server | breezedeus | 2024-11-26T14:23:46Z | 66 | 0 | null | [
"onnx",
"OCR",
"STD",
"Chinese",
"English",
"Optical Character Recognition",
"license:apache-2.0",
"region:us"
] | null | 2024-11-26T14:20:56Z | ---
license: apache-2.0
tags:
- OCR
- STD
- Chinese
- English
- Optical Character Recognition
---
# Text Recognition Model for CnOCR
CnOCR: an awesome Chinese/English OCR Python toolkit based on PyTorch. It comes with 20+ well-trained models for different application scenarios and can be used directly after installation.
See more information: [CnOCR](https://github.com/breezedeus/CnOCR).
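A minimal recognition sketch with the CnOCR package is shown below; the installation extra and the `rec_model_name` value are assumptions based on this checkpoint's name, so check the CnOCR documentation for the exact identifiers:
```python
# pip install "cnocr[ort-cpu]"   # assumed installation extra; see the CnOCR docs
from cnocr import CnOcr

# The recognition model name is assumed to correspond to this checkpoint.
ocr = CnOcr(rec_model_name="ch_PP-OCRv4_server")
out = ocr.ocr("example.jpg")  # list of detected lines with text, score and position
print(out)
```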
|
platzi/platzi-vit-model-Nicolas | platzi | 2024-11-26T14:17:29Z | 198 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-11-26T14:13:20Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: platzi-vit-model-Nicolas
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-Nicolas
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1528
- Accuracy: 0.9624
## Model description
More information needed
## Intended uses & limitations
More information needed
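In the absence of further details, a minimal inference sketch might look like the following (it assumes the checkpoint and its image processor are available on the Hub under this repository id; the label set depends on the unspecified fine-tuning dataset):
```python
from transformers import pipeline

# Hypothetical usage of the fine-tuned image classifier.
classifier = pipeline("image-classification", model="platzi/platzi-vit-model-Nicolas")
print(classifier("path/to/image.jpg"))
```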
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0662 | 3.8462 | 500 | 0.1528 | 0.9624 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|