Dataset columns:
- modelId: string (length 5 to 139)
- author: string (length 2 to 42)
- last_modified: timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-06-27 00:42:13)
- downloads: int64 (0 to 223M)
- likes: int64 (0 to 11.7k)
- library_name: string (499 classes)
- tags: sequence (length 1 to 4.05k)
- pipeline_tag: string (54 classes)
- createdAt: timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-06-27 00:40:00)
- card: string (length 11 to 1.01M)

Each record below is one row: modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card.
mradermacher/ChatML-Nemo-Pro-V2-GGUF | mradermacher | 2024-11-25T16:48:12Z | 14 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:win10/ChatML-Nemo-Pro-V2",
"base_model:quantized:win10/ChatML-Nemo-Pro-V2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-25T12:24:28Z |
---
base_model: win10/ChatML-Nemo-Pro-V2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/win10/ChatML-Nemo-Pro-V2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ChatML-Nemo-Pro-V2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
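For a scripted alternative, the hedged sketch below (assuming `huggingface_hub` and the `llama-cpp-python` bindings are installed) downloads one of the static quants listed in the table that follows and runs a short completion:
```python
# Hedged sketch: fetch one static quant from this repo and run it via llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`; the filename matches the Q4_K_M row below.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/ChatML-Nemo-Pro-V2-GGUF",
    filename="ChatML-Nemo-Pro-V2.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
result = llm("Write a one-sentence greeting.", max_tokens=64)
print(result["choices"][0]["text"])
```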
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ChatML-Nemo-Pro-V2-GGUF/resolve/main/ChatML-Nemo-Pro-V2.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/ChatML-Nemo-Pro-V2-GGUF/resolve/main/ChatML-Nemo-Pro-V2.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/ChatML-Nemo-Pro-V2-GGUF/resolve/main/ChatML-Nemo-Pro-V2.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ChatML-Nemo-Pro-V2-GGUF/resolve/main/ChatML-Nemo-Pro-V2.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/ChatML-Nemo-Pro-V2-GGUF/resolve/main/ChatML-Nemo-Pro-V2.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/ChatML-Nemo-Pro-V2-GGUF/resolve/main/ChatML-Nemo-Pro-V2.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/ChatML-Nemo-Pro-V2-GGUF/resolve/main/ChatML-Nemo-Pro-V2.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ChatML-Nemo-Pro-V2-GGUF/resolve/main/ChatML-Nemo-Pro-V2.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ChatML-Nemo-Pro-V2-GGUF/resolve/main/ChatML-Nemo-Pro-V2.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/ChatML-Nemo-Pro-V2-GGUF/resolve/main/ChatML-Nemo-Pro-V2.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/ChatML-Nemo-Pro-V2-GGUF/resolve/main/ChatML-Nemo-Pro-V2.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ChatML-Nemo-Pro-V2-GGUF/resolve/main/ChatML-Nemo-Pro-V2.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
ikawrakow has published a handy graph comparing some lower-quality quant types (lower is better).
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Triangle104/Llama-3.2-3B-Instruct-abliterated-Q5_K_M-GGUF | Triangle104 | 2024-11-25T16:45:58Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"base_model:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T10:06:54Z |
---
library_name: transformers
license: llama3.2
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
tags:
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.2-3B-Instruct-abliterated-Q5_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Llama-3.2-3B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) for more details on the model.
---
Model details:
This is an uncensored version of Llama 3.2 3B Instruct created with abliteration (see this article to learn more about the technique).
Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-abliterated-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-abliterated-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-abliterated-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-abliterated-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-abliterated-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-abliterated-q5_k_m.gguf -c 2048
```
|
furrutiav/roberta_mixtral_nllfg_rubric_qnli_sentence_embd_perplexity | furrutiav | 2024-11-25T16:45:15Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-11-25T16:44:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
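Pending card-specific instructions, a minimal sketch (assuming this is a standard RoBERTa checkpoint used for sentence embeddings, as the repo name and `feature-extraction` tag suggest):
```python
# Minimal sketch, assuming a standard RoBERTa feature-extraction checkpoint.
import torch
from transformers import AutoTokenizer, AutoModel

repo = "furrutiav/roberta_mixtral_nllfg_rubric_qnli_sentence_embd_perplexity"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer("An example sentence to embed.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the last hidden state into a single sentence embedding.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)
```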
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
briannlongzhao/smile_textual_inversion | briannlongzhao | 2024-11-25T16:45:11Z | 8 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-11-10T13:23:29Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - briannlongzhao/smile_textual_inversion
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1.
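A minimal loading sketch with diffusers is given below; the placeholder token is an assumption, since the card does not state the token name saved with these weights:
```python
# Hedged sketch: load the textual-inversion embedding into the SD 2.1 pipeline it was trained against.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Pulls the learned embedding from this repo; the actual placeholder token may differ.
pipe.load_textual_inversion("briannlongzhao/smile_textual_inversion")

image = pipe("a photo of a <smile> face").images[0]  # "<smile>" is an assumed token name
image.save("smile_sample.png")
```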
|
Triangle104/Llama-3.2-3B-Instruct-abliterated-Q5_K_S-GGUF | Triangle104 | 2024-11-25T16:45:10Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"base_model:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T10:00:05Z |
---
library_name: transformers
license: llama3.2
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
tags:
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.2-3B-Instruct-abliterated-Q5_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Llama-3.2-3B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) for more details on the model.
---
Model details:
This is an uncensored version of Llama 3.2 3B Instruct created with abliteration (see this article to learn more about the technique).
Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-abliterated-Q5_K_S-GGUF --hf-file llama-3.2-3b-instruct-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-abliterated-Q5_K_S-GGUF --hf-file llama-3.2-3b-instruct-abliterated-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-abliterated-Q5_K_S-GGUF --hf-file llama-3.2-3b-instruct-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-abliterated-Q5_K_S-GGUF --hf-file llama-3.2-3b-instruct-abliterated-q5_k_s.gguf -c 2048
```
|
omarelsayeed/LayoutReader85Large | omarelsayeed | 2024-11-25T16:42:42Z | 137 | 0 | transformers | [
"transformers",
"safetensors",
"layoutlmv3",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-25T16:42:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
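Pending card-specific instructions, a hedged sketch for a LayoutLMv3 token-classification checkpoint (the processor is taken from `microsoft/layoutlmv3-base` and the words/boxes are placeholders, since the card does not document the expected inputs):
```python
# Hedged sketch, assuming a standard LayoutLMv3 token-classification setup.
import torch
from PIL import Image
from transformers import LayoutLMv3Processor, AutoModelForTokenClassification

processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = AutoModelForTokenClassification.from_pretrained("omarelsayeed/LayoutReader85Large")

image = Image.new("RGB", (224, 224), "white")   # placeholder page image
words = ["hello", "world"]                       # placeholder OCR words
boxes = [[10, 10, 60, 30], [70, 10, 120, 30]]    # word boxes in 0-1000 coordinates

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits
print(logits.argmax(-1))                          # predicted label id per token
```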
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Llama-3.2-3B-Instruct-abliterated-Q4_K_S-GGUF | Triangle104 | 2024-11-25T16:40:44Z | 14 | 0 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"base_model:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T09:57:07Z |
---
library_name: transformers
license: llama3.2
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
tags:
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.2-3B-Instruct-abliterated-Q4_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Llama-3.2-3B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) for more details on the model.
---
Model details:
This is an uncensored version of Llama 3.2 3B Instruct created with abliteration (see this article to learn more about the technique).
Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-abliterated-Q4_K_S-GGUF --hf-file llama-3.2-3b-instruct-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-abliterated-Q4_K_S-GGUF --hf-file llama-3.2-3b-instruct-abliterated-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-abliterated-Q4_K_S-GGUF --hf-file llama-3.2-3b-instruct-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-abliterated-Q4_K_S-GGUF --hf-file llama-3.2-3b-instruct-abliterated-q4_k_s.gguf -c 2048
```
|
maxg73872/biobert-v1.1-finetuned-medmcqa-2024-11-25-T16-21-48 | maxg73872 | 2024-11-25T16:37:26Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:dmis-lab/biobert-v1.1",
"base_model:finetune:dmis-lab/biobert-v1.1",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-11-25T16:37:07Z |
---
library_name: transformers
base_model: dmis-lab/biobert-v1.1
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: biobert-v1.1-finetuned-medmcqa-2024-11-25-T16-21-48
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-v1.1-finetuned-medmcqa-2024-11-25-T16-21-48
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0952
- Accuracy: 0.6190
- F1: 0.6142
## Model description
More information needed
## Intended uses & limitations
More information needed
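That said, a minimal inference sketch, assuming the standard 🤗 multiple-choice API and a four-option MedMCQA-style question (the question text is only an illustration):
```python
# Hedged sketch, assuming a standard BERT multiple-choice head.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

repo = "maxg73872/biobert-v1.1-finetuned-medmcqa-2024-11-25-T16-21-48"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

question = "Deficiency of which vitamin causes scurvy?"
options = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"]

# Pair the question with each option, then stack into (1, num_choices, seq_len).
enc = tokenizer([question] * len(options), options,
                padding=True, truncation=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**inputs).logits
print(options[logits.argmax(-1).item()])
```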
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000159
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
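Mapped onto the 🤗 Trainer API, these settings correspond roughly to the `TrainingArguments` sketch below (a reconstruction, not the exact training script):
```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="biobert-v1.1-finetuned-medmcqa",
    learning_rate=1.59e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,   # 4 * 8 = effective train batch size of 32
    num_train_epochs=5,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```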
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 1.0926 | 0.9978 | 57 | 1.0952 | 0.6190 | 0.6142 |
| 0.8087 | 1.9956 | 114 | 0.8597 | 0.5952 | 0.6151 |
| 0.5811 | 2.9934 | 171 | 0.8742 | 0.6190 | 0.6371 |
| 0.368 | 3.9912 | 228 | 1.3578 | 0.5714 | 0.5839 |
| 0.1739 | 4.9891 | 285 | 1.6110 | 0.5952 | 0.6032 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Yntec/OpenLexica | Yntec | 2024-11-25T16:33:44Z | 368 | 2 | diffusers | [
"diffusers",
"safetensors",
"anime",
"art",
"general",
"MichaelFan",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-13T18:44:51Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- anime
- art
- general
- MichaelFan
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# Open Lexica
Sample images and prompt:


Pretty Cute Girl, sitting, Detailed Chibi Eyes, holding baseball, beautiful detailed legs, baseball uniform, gorgeous detailed hair, Magazine ad, iconic, 1943, from the movie, sharp focus. visible brushstrokes by Kyoani and artgerm and leyendecker and Dave Rapoza
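To try the prompt, a minimal diffusers sketch (the repo's `diffusers:StableDiffusionPipeline` tag indicates the pipeline class; dtype, device, and step count are assumptions):
```python
# Hedged sketch: run the sample prompt through the repo's StableDiffusionPipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/OpenLexica", torch_dtype=torch.float16
).to("cuda")

prompt = "Pretty Cute Girl, sitting, Detailed Chibi Eyes, holding baseball, baseball uniform"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("openlexica_sample.png")
```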
Original page:
https://civitai.com/models/58609/openlexica |
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k25_task5_organization_fold1 | MayBashendy | 2024-11-25T16:31:05Z | 165 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T16:19:16Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k25_task5_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k25_task5_organization_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6034
- Qwk: 0.6767
- Mse: 0.6034
- Rmse: 0.7768
## Model description
More information needed
## Intended uses & limitations
More information needed
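In the meantime, a minimal inference sketch (assuming the checkpoint loads with `AutoModelForSequenceClassification`, which the `text-classification` tag suggests; the input sentence is only an illustration):
```python
# Hedged sketch, assuming a standard text-classification head on AraBERT.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "MayBashendy/Arabic_FineTuningAraBERT_AugV5_k25_task5_organization_fold1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("نص عربي قصير للتقييم", return_tensors="pt")  # "a short Arabic text to score"
with torch.no_grad():
    logits = model(**inputs).logits
# The Qwk/MSE/RMSE metrics reported in this card suggest the output encodes an ordinal score.
print(logits)
```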
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0171 | 2 | 3.0073 | 0.1165 | 3.0073 | 1.7342 |
| No log | 0.0342 | 4 | 1.8180 | -0.0644 | 1.8180 | 1.3483 |
| No log | 0.0513 | 6 | 1.3918 | 0.2012 | 1.3918 | 1.1798 |
| No log | 0.0684 | 8 | 1.1411 | 0.2049 | 1.1411 | 1.0682 |
| No log | 0.0855 | 10 | 1.0850 | 0.1977 | 1.0850 | 1.0416 |
| No log | 0.1026 | 12 | 0.9733 | 0.2933 | 0.9733 | 0.9866 |
| No log | 0.1197 | 14 | 1.0467 | 0.1571 | 1.0467 | 1.0231 |
| No log | 0.1368 | 16 | 1.1359 | 0.0614 | 1.1359 | 1.0658 |
| No log | 0.1538 | 18 | 1.2224 | -0.0039 | 1.2224 | 1.1056 |
| No log | 0.1709 | 20 | 1.2582 | 0.0 | 1.2582 | 1.1217 |
| No log | 0.1880 | 22 | 1.2949 | 0.0 | 1.2949 | 1.1379 |
| No log | 0.2051 | 24 | 1.1785 | 0.1571 | 1.1785 | 1.0856 |
| No log | 0.2222 | 26 | 1.1101 | 0.1571 | 1.1102 | 1.0536 |
| No log | 0.2393 | 28 | 1.1902 | 0.2279 | 1.1902 | 1.0910 |
| No log | 0.2564 | 30 | 1.2669 | 0.2604 | 1.2669 | 1.1256 |
| No log | 0.2735 | 32 | 1.4043 | 0.2408 | 1.4043 | 1.1850 |
| No log | 0.2906 | 34 | 1.3398 | 0.2408 | 1.3398 | 1.1575 |
| No log | 0.3077 | 36 | 1.0573 | 0.3195 | 1.0573 | 1.0282 |
| No log | 0.3248 | 38 | 0.9570 | 0.3354 | 0.9570 | 0.9782 |
| No log | 0.3419 | 40 | 0.8428 | 0.2933 | 0.8428 | 0.9180 |
| No log | 0.3590 | 42 | 1.0809 | 0.1571 | 1.0809 | 1.0396 |
| No log | 0.3761 | 44 | 1.3041 | 0.0809 | 1.3041 | 1.1420 |
| No log | 0.3932 | 46 | 1.0454 | 0.2279 | 1.0454 | 1.0225 |
| No log | 0.4103 | 48 | 0.8442 | 0.2857 | 0.8442 | 0.9188 |
| No log | 0.4274 | 50 | 0.8376 | 0.4345 | 0.8376 | 0.9152 |
| No log | 0.4444 | 52 | 0.8841 | 0.4345 | 0.8841 | 0.9402 |
| No log | 0.4615 | 54 | 0.8991 | 0.4236 | 0.8991 | 0.9482 |
| No log | 0.4786 | 56 | 0.8236 | 0.3443 | 0.8236 | 0.9075 |
| No log | 0.4957 | 58 | 0.7684 | 0.2933 | 0.7684 | 0.8766 |
| No log | 0.5128 | 60 | 0.7643 | 0.4020 | 0.7643 | 0.8742 |
| No log | 0.5299 | 62 | 0.8211 | 0.5349 | 0.8211 | 0.9061 |
| No log | 0.5470 | 64 | 0.8862 | 0.3594 | 0.8862 | 0.9414 |
| No log | 0.5641 | 66 | 0.8854 | 0.3448 | 0.8854 | 0.9410 |
| No log | 0.5812 | 68 | 0.8494 | 0.2933 | 0.8494 | 0.9216 |
| No log | 0.5983 | 70 | 0.8517 | 0.2933 | 0.8517 | 0.9229 |
| No log | 0.6154 | 72 | 0.8255 | 0.2933 | 0.8255 | 0.9086 |
| No log | 0.6325 | 74 | 0.8177 | 0.2933 | 0.8177 | 0.9043 |
| No log | 0.6496 | 76 | 0.8439 | 0.2933 | 0.8439 | 0.9187 |
| No log | 0.6667 | 78 | 0.8524 | 0.2933 | 0.8524 | 0.9232 |
| No log | 0.6838 | 80 | 0.8946 | 0.2933 | 0.8946 | 0.9458 |
| No log | 0.7009 | 82 | 0.9082 | 0.2933 | 0.9082 | 0.9530 |
| No log | 0.7179 | 84 | 0.8540 | 0.2933 | 0.8540 | 0.9241 |
| No log | 0.7350 | 86 | 0.7954 | 0.3443 | 0.7954 | 0.8919 |
| No log | 0.7521 | 88 | 0.7785 | 0.2933 | 0.7785 | 0.8823 |
| No log | 0.7692 | 90 | 0.8274 | 0.4020 | 0.8274 | 0.9096 |
| No log | 0.7863 | 92 | 0.9377 | 0.3691 | 0.9377 | 0.9683 |
| No log | 0.8034 | 94 | 1.0009 | 0.2058 | 1.0009 | 1.0004 |
| No log | 0.8205 | 96 | 1.0162 | 0.1506 | 1.0162 | 1.0081 |
| No log | 0.8376 | 98 | 1.0419 | 0.0 | 1.0419 | 1.0207 |
| No log | 0.8547 | 100 | 1.0303 | 0.0 | 1.0303 | 1.0151 |
| No log | 0.8718 | 102 | 1.0275 | 0.2279 | 1.0275 | 1.0137 |
| No log | 0.8889 | 104 | 1.0277 | 0.2279 | 1.0277 | 1.0138 |
| No log | 0.9060 | 106 | 0.9340 | 0.2279 | 0.9340 | 0.9664 |
| No log | 0.9231 | 108 | 0.8287 | 0.2933 | 0.8287 | 0.9103 |
| No log | 0.9402 | 110 | 0.7958 | 0.2933 | 0.7958 | 0.8921 |
| No log | 0.9573 | 112 | 0.7918 | 0.2933 | 0.7918 | 0.8898 |
| No log | 0.9744 | 114 | 0.7829 | 0.2933 | 0.7829 | 0.8848 |
| No log | 0.9915 | 116 | 0.7885 | 0.2933 | 0.7885 | 0.8880 |
| No log | 1.0085 | 118 | 0.8078 | 0.2933 | 0.8078 | 0.8988 |
| No log | 1.0256 | 120 | 0.8271 | 0.2933 | 0.8271 | 0.9095 |
| No log | 1.0427 | 122 | 0.8285 | 0.4178 | 0.8285 | 0.9102 |
| No log | 1.0598 | 124 | 0.8196 | 0.4178 | 0.8196 | 0.9053 |
| No log | 1.0769 | 126 | 0.8209 | 0.4178 | 0.8209 | 0.9060 |
| No log | 1.0940 | 128 | 0.8216 | 0.4178 | 0.8216 | 0.9064 |
| No log | 1.1111 | 130 | 0.8706 | 0.2933 | 0.8706 | 0.9331 |
| No log | 1.1282 | 132 | 0.8947 | 0.2933 | 0.8947 | 0.9459 |
| No log | 1.1453 | 134 | 0.8705 | 0.2933 | 0.8705 | 0.9330 |
| No log | 1.1624 | 136 | 0.8045 | 0.4178 | 0.8045 | 0.8970 |
| No log | 1.1795 | 138 | 0.7641 | 0.5143 | 0.7641 | 0.8741 |
| No log | 1.1966 | 140 | 0.7644 | 0.4169 | 0.7644 | 0.8743 |
| No log | 1.2137 | 142 | 0.7755 | 0.5170 | 0.7755 | 0.8806 |
| No log | 1.2308 | 144 | 0.7658 | 0.5665 | 0.7658 | 0.8751 |
| No log | 1.2479 | 146 | 0.7662 | 0.5665 | 0.7662 | 0.8753 |
| No log | 1.2650 | 148 | 0.8024 | 0.5550 | 0.8024 | 0.8958 |
| No log | 1.2821 | 150 | 0.8514 | 0.5148 | 0.8514 | 0.9227 |
| No log | 1.2991 | 152 | 0.8118 | 0.5224 | 0.8118 | 0.9010 |
| No log | 1.3162 | 154 | 0.7983 | 0.4649 | 0.7983 | 0.8935 |
| No log | 1.3333 | 156 | 0.8211 | 0.3355 | 0.8211 | 0.9062 |
| No log | 1.3504 | 158 | 0.7757 | 0.4649 | 0.7757 | 0.8808 |
| No log | 1.3675 | 160 | 0.7376 | 0.5584 | 0.7376 | 0.8589 |
| No log | 1.3846 | 162 | 0.7861 | 0.6034 | 0.7861 | 0.8866 |
| No log | 1.4017 | 164 | 0.8454 | 0.5856 | 0.8454 | 0.9195 |
| No log | 1.4188 | 166 | 0.8317 | 0.5856 | 0.8317 | 0.9120 |
| No log | 1.4359 | 168 | 0.7359 | 0.6034 | 0.7359 | 0.8578 |
| No log | 1.4530 | 170 | 0.7278 | 0.6034 | 0.7278 | 0.8531 |
| No log | 1.4701 | 172 | 0.7116 | 0.6034 | 0.7116 | 0.8436 |
| No log | 1.4872 | 174 | 0.6963 | 0.6034 | 0.6963 | 0.8345 |
| No log | 1.5043 | 176 | 0.6498 | 0.6632 | 0.6498 | 0.8061 |
| No log | 1.5214 | 178 | 0.6254 | 0.6632 | 0.6254 | 0.7908 |
| No log | 1.5385 | 180 | 0.6317 | 0.6632 | 0.6317 | 0.7948 |
| No log | 1.5556 | 182 | 0.6371 | 0.6632 | 0.6371 | 0.7982 |
| No log | 1.5726 | 184 | 0.6437 | 0.6456 | 0.6437 | 0.8023 |
| No log | 1.5897 | 186 | 0.6700 | 0.6287 | 0.6700 | 0.8185 |
| No log | 1.6068 | 188 | 0.7038 | 0.5642 | 0.7038 | 0.8389 |
| No log | 1.6239 | 190 | 0.6837 | 0.5663 | 0.6837 | 0.8269 |
| No log | 1.6410 | 192 | 0.7037 | 0.5466 | 0.7037 | 0.8389 |
| No log | 1.6581 | 194 | 0.7028 | 0.5320 | 0.7028 | 0.8383 |
| No log | 1.6752 | 196 | 0.6888 | 0.5663 | 0.6888 | 0.8299 |
| No log | 1.6923 | 198 | 0.7543 | 0.6026 | 0.7543 | 0.8685 |
| No log | 1.7094 | 200 | 0.8499 | 0.6154 | 0.8499 | 0.9219 |
| No log | 1.7265 | 202 | 0.8218 | 0.6154 | 0.8218 | 0.9065 |
| No log | 1.7436 | 204 | 0.7219 | 0.6782 | 0.7219 | 0.8497 |
| No log | 1.7607 | 206 | 0.6755 | 0.6782 | 0.6755 | 0.8219 |
| No log | 1.7778 | 208 | 0.6659 | 0.6009 | 0.6659 | 0.8160 |
| No log | 1.7949 | 210 | 0.7198 | 0.6009 | 0.7198 | 0.8484 |
| No log | 1.8120 | 212 | 0.7247 | 0.6145 | 0.7247 | 0.8513 |
| No log | 1.8291 | 214 | 0.6671 | 0.6009 | 0.6671 | 0.8167 |
| No log | 1.8462 | 216 | 0.6466 | 0.6028 | 0.6466 | 0.8041 |
| No log | 1.8632 | 218 | 0.6447 | 0.6163 | 0.6447 | 0.8029 |
| No log | 1.8803 | 220 | 0.7605 | 0.6767 | 0.7605 | 0.8721 |
| No log | 1.8974 | 222 | 0.8090 | 0.6154 | 0.8090 | 0.8995 |
| No log | 1.9145 | 224 | 0.7851 | 0.6145 | 0.7851 | 0.8860 |
| No log | 1.9316 | 226 | 0.8537 | 0.4885 | 0.8537 | 0.9240 |
| No log | 1.9487 | 228 | 0.9116 | 0.575 | 0.9116 | 0.9548 |
| No log | 1.9658 | 230 | 0.8276 | 0.4638 | 0.8276 | 0.9097 |
| No log | 1.9829 | 232 | 0.7880 | 0.4987 | 0.7880 | 0.8877 |
| No log | 2.0 | 234 | 0.7685 | 0.5466 | 0.7685 | 0.8766 |
| No log | 2.0171 | 236 | 0.7624 | 0.5642 | 0.7624 | 0.8732 |
| No log | 2.0342 | 238 | 0.7900 | 0.5642 | 0.7900 | 0.8888 |
| No log | 2.0513 | 240 | 0.7752 | 0.5642 | 0.7752 | 0.8804 |
| No log | 2.0684 | 242 | 0.8065 | 0.4987 | 0.8065 | 0.8980 |
| No log | 2.0855 | 244 | 0.8088 | 0.4430 | 0.8088 | 0.8993 |
| No log | 2.1026 | 246 | 0.8880 | 0.5642 | 0.8880 | 0.9424 |
| No log | 2.1197 | 248 | 0.8580 | 0.5642 | 0.8580 | 0.9263 |
| No log | 2.1368 | 250 | 0.7447 | 0.5642 | 0.7447 | 0.8630 |
| No log | 2.1538 | 252 | 0.6994 | 0.5813 | 0.6994 | 0.8363 |
| No log | 2.1709 | 254 | 0.7008 | 0.5641 | 0.7008 | 0.8372 |
| No log | 2.1880 | 256 | 0.6880 | 0.5813 | 0.6880 | 0.8295 |
| No log | 2.2051 | 258 | 0.7502 | 0.5642 | 0.7502 | 0.8661 |
| No log | 2.2222 | 260 | 0.8216 | 0.6154 | 0.8216 | 0.9064 |
| No log | 2.2393 | 262 | 0.7791 | 0.6154 | 0.7791 | 0.8827 |
| No log | 2.2564 | 264 | 0.7051 | 0.6154 | 0.7051 | 0.8397 |
| No log | 2.2735 | 266 | 0.6409 | 0.6145 | 0.6409 | 0.8006 |
| No log | 2.2906 | 268 | 0.6486 | 0.5949 | 0.6486 | 0.8053 |
| No log | 2.3077 | 270 | 0.6782 | 0.6247 | 0.6782 | 0.8235 |
| No log | 2.3248 | 272 | 0.7435 | 0.575 | 0.7435 | 0.8623 |
| No log | 2.3419 | 274 | 0.7682 | 0.575 | 0.7682 | 0.8765 |
| No log | 2.3590 | 276 | 0.7858 | 0.575 | 0.7858 | 0.8865 |
| No log | 2.3761 | 278 | 0.7699 | 0.5148 | 0.7699 | 0.8774 |
| No log | 2.3932 | 280 | 0.6988 | 0.5856 | 0.6988 | 0.8359 |
| No log | 2.4103 | 282 | 0.6613 | 0.6122 | 0.6613 | 0.8132 |
| No log | 2.4274 | 284 | 0.6928 | 0.5685 | 0.6928 | 0.8324 |
| No log | 2.4444 | 286 | 0.6653 | 0.6173 | 0.6653 | 0.8157 |
| No log | 2.4615 | 288 | 0.6684 | 0.6034 | 0.6684 | 0.8176 |
| No log | 2.4786 | 290 | 0.8378 | 0.575 | 0.8378 | 0.9153 |
| No log | 2.4957 | 292 | 0.9367 | 0.5804 | 0.9367 | 0.9678 |
| No log | 2.5128 | 294 | 0.8965 | 0.5804 | 0.8965 | 0.9468 |
| No log | 2.5299 | 296 | 0.7529 | 0.6178 | 0.7529 | 0.8677 |
| No log | 2.5470 | 298 | 0.6419 | 0.6305 | 0.6419 | 0.8012 |
| No log | 2.5641 | 300 | 0.6152 | 0.6811 | 0.6152 | 0.7844 |
| No log | 2.5812 | 302 | 0.6281 | 0.6652 | 0.6281 | 0.7925 |
| No log | 2.5983 | 304 | 0.7007 | 0.6287 | 0.7007 | 0.8371 |
| No log | 2.6154 | 306 | 0.9145 | 0.6178 | 0.9145 | 0.9563 |
| No log | 2.6325 | 308 | 1.0440 | 0.5946 | 1.0440 | 1.0217 |
| No log | 2.6496 | 310 | 0.9763 | 0.5501 | 0.9763 | 0.9881 |
| No log | 2.6667 | 312 | 0.7762 | 0.5642 | 0.7762 | 0.8810 |
| No log | 2.6838 | 314 | 0.6499 | 0.6456 | 0.6499 | 0.8062 |
| No log | 2.7009 | 316 | 0.6238 | 0.6324 | 0.6238 | 0.7898 |
| No log | 2.7179 | 318 | 0.6225 | 0.6305 | 0.6225 | 0.7890 |
| No log | 2.7350 | 320 | 0.6774 | 0.5792 | 0.6774 | 0.8231 |
| No log | 2.7521 | 322 | 0.7064 | 0.5792 | 0.7064 | 0.8405 |
| No log | 2.7692 | 324 | 0.6769 | 0.5792 | 0.6769 | 0.8228 |
| No log | 2.7863 | 326 | 0.6304 | 0.6456 | 0.6304 | 0.7940 |
| No log | 2.8034 | 328 | 0.6338 | 0.6456 | 0.6338 | 0.7961 |
| No log | 2.8205 | 330 | 0.6604 | 0.6456 | 0.6604 | 0.8127 |
| No log | 2.8376 | 332 | 0.7276 | 0.5642 | 0.7276 | 0.8530 |
| No log | 2.8547 | 334 | 0.7325 | 0.5642 | 0.7325 | 0.8558 |
| No log | 2.8718 | 336 | 0.7351 | 0.5642 | 0.7351 | 0.8574 |
| No log | 2.8889 | 338 | 0.7463 | 0.5642 | 0.7463 | 0.8639 |
| No log | 2.9060 | 340 | 0.7062 | 0.5642 | 0.7062 | 0.8403 |
| No log | 2.9231 | 342 | 0.6687 | 0.5949 | 0.6687 | 0.8177 |
| No log | 2.9402 | 344 | 0.6660 | 0.6834 | 0.6660 | 0.8161 |
| No log | 2.9573 | 346 | 0.6593 | 0.7011 | 0.6593 | 0.8120 |
| No log | 2.9744 | 348 | 0.6775 | 0.5642 | 0.6775 | 0.8231 |
| No log | 2.9915 | 350 | 0.7223 | 0.5642 | 0.7223 | 0.8499 |
| No log | 3.0085 | 352 | 0.7364 | 0.5642 | 0.7364 | 0.8581 |
| No log | 3.0256 | 354 | 0.6907 | 0.5642 | 0.6907 | 0.8311 |
| No log | 3.0427 | 356 | 0.6183 | 0.6287 | 0.6183 | 0.7863 |
| No log | 3.0598 | 358 | 0.5988 | 0.6992 | 0.5988 | 0.7738 |
| No log | 3.0769 | 360 | 0.6048 | 0.7304 | 0.6048 | 0.7777 |
| No log | 3.0940 | 362 | 0.6595 | 0.5274 | 0.6595 | 0.8121 |
| No log | 3.1111 | 364 | 0.7767 | 0.5148 | 0.7767 | 0.8813 |
| No log | 3.1282 | 366 | 0.8392 | 0.5148 | 0.8392 | 0.9161 |
| No log | 3.1453 | 368 | 0.8545 | 0.5148 | 0.8545 | 0.9244 |
| No log | 3.1624 | 370 | 0.8216 | 0.5148 | 0.8216 | 0.9064 |
| No log | 3.1795 | 372 | 0.7755 | 0.5148 | 0.7755 | 0.8806 |
| No log | 3.1966 | 374 | 0.7515 | 0.5148 | 0.7515 | 0.8669 |
| No log | 3.2137 | 376 | 0.6881 | 0.5856 | 0.6881 | 0.8295 |
| No log | 3.2308 | 378 | 0.6791 | 0.6034 | 0.6791 | 0.8241 |
| No log | 3.2479 | 380 | 0.7122 | 0.5856 | 0.7122 | 0.8439 |
| No log | 3.2650 | 382 | 0.7911 | 0.5856 | 0.7911 | 0.8894 |
| No log | 3.2821 | 384 | 0.8658 | 0.5 | 0.8658 | 0.9305 |
| No log | 3.2991 | 386 | 0.9303 | 0.5599 | 0.9303 | 0.9645 |
| No log | 3.3162 | 388 | 0.9129 | 0.5599 | 0.9129 | 0.9554 |
| No log | 3.3333 | 390 | 0.8281 | 0.5687 | 0.8281 | 0.9100 |
| No log | 3.3504 | 392 | 0.7376 | 0.5856 | 0.7376 | 0.8589 |
| No log | 3.3675 | 394 | 0.6989 | 0.5856 | 0.6989 | 0.8360 |
| No log | 3.3846 | 396 | 0.6979 | 0.5856 | 0.6979 | 0.8354 |
| No log | 3.4017 | 398 | 0.7279 | 0.5856 | 0.7279 | 0.8532 |
| No log | 3.4188 | 400 | 0.7081 | 0.5856 | 0.7081 | 0.8415 |
| No log | 3.4359 | 402 | 0.6352 | 0.5929 | 0.6352 | 0.7970 |
| No log | 3.4530 | 404 | 0.6051 | 0.6456 | 0.6051 | 0.7779 |
| No log | 3.4701 | 406 | 0.5936 | 0.6305 | 0.5936 | 0.7705 |
| No log | 3.4872 | 408 | 0.6108 | 0.6456 | 0.6108 | 0.7815 |
| No log | 3.5043 | 410 | 0.6526 | 0.6287 | 0.6526 | 0.8078 |
| No log | 3.5214 | 412 | 0.6662 | 0.6287 | 0.6662 | 0.8162 |
| No log | 3.5385 | 414 | 0.6576 | 0.6287 | 0.6576 | 0.8109 |
| No log | 3.5556 | 416 | 0.6725 | 0.5929 | 0.6725 | 0.8201 |
| No log | 3.5726 | 418 | 0.6820 | 0.6445 | 0.6820 | 0.8258 |
| No log | 3.5897 | 420 | 0.6555 | 0.6287 | 0.6555 | 0.8096 |
| No log | 3.6068 | 422 | 0.6093 | 0.6287 | 0.6093 | 0.7806 |
| No log | 3.6239 | 424 | 0.5918 | 0.6474 | 0.5918 | 0.7693 |
| No log | 3.6410 | 426 | 0.5854 | 0.6474 | 0.5854 | 0.7651 |
| No log | 3.6581 | 428 | 0.6099 | 0.6287 | 0.6099 | 0.7810 |
| No log | 3.6752 | 430 | 0.6611 | 0.6287 | 0.6611 | 0.8131 |
| No log | 3.6923 | 432 | 0.7208 | 0.5929 | 0.7208 | 0.8490 |
| No log | 3.7094 | 434 | 0.7299 | 0.5929 | 0.7299 | 0.8544 |
| No log | 3.7265 | 436 | 0.6907 | 0.6287 | 0.6907 | 0.8311 |
| No log | 3.7436 | 438 | 0.6484 | 0.6456 | 0.6484 | 0.8052 |
| No log | 3.7607 | 440 | 0.6298 | 0.5990 | 0.6298 | 0.7936 |
| No log | 3.7778 | 442 | 0.6154 | 0.6324 | 0.6154 | 0.7845 |
| No log | 3.7949 | 444 | 0.6040 | 0.6495 | 0.6040 | 0.7772 |
| No log | 3.8120 | 446 | 0.6002 | 0.6950 | 0.6002 | 0.7748 |
| No log | 3.8291 | 448 | 0.5969 | 0.6950 | 0.5969 | 0.7726 |
| No log | 3.8462 | 450 | 0.6124 | 0.6652 | 0.6124 | 0.7826 |
| No log | 3.8632 | 452 | 0.6811 | 0.5642 | 0.6811 | 0.8253 |
| No log | 3.8803 | 454 | 0.7453 | 0.5642 | 0.7453 | 0.8633 |
| No log | 3.8974 | 456 | 0.7170 | 0.5642 | 0.7170 | 0.8467 |
| No log | 3.9145 | 458 | 0.6804 | 0.5856 | 0.6804 | 0.8249 |
| No log | 3.9316 | 460 | 0.6838 | 0.5856 | 0.6838 | 0.8269 |
| No log | 3.9487 | 462 | 0.7267 | 0.5856 | 0.7267 | 0.8525 |
| No log | 3.9658 | 464 | 0.8292 | 0.5148 | 0.8292 | 0.9106 |
| No log | 3.9829 | 466 | 0.9146 | 0.5148 | 0.9146 | 0.9563 |
| No log | 4.0 | 468 | 0.9248 | 0.5148 | 0.9248 | 0.9616 |
| No log | 4.0171 | 470 | 0.8197 | 0.5856 | 0.8197 | 0.9054 |
| No log | 4.0342 | 472 | 0.7383 | 0.5856 | 0.7383 | 0.8593 |
| No log | 4.0513 | 474 | 0.6746 | 0.52 | 0.6746 | 0.8213 |
| No log | 4.0684 | 476 | 0.6656 | 0.5924 | 0.6656 | 0.8159 |
| No log | 4.0855 | 478 | 0.6631 | 0.5756 | 0.6631 | 0.8143 |
| No log | 4.1026 | 480 | 0.6873 | 0.5929 | 0.6873 | 0.8290 |
| No log | 4.1197 | 482 | 0.7620 | 0.5929 | 0.7620 | 0.8729 |
| No log | 4.1368 | 484 | 0.8074 | 0.6445 | 0.8074 | 0.8986 |
| No log | 4.1538 | 486 | 0.7734 | 0.6445 | 0.7734 | 0.8794 |
| No log | 4.1709 | 488 | 0.6811 | 0.6287 | 0.6811 | 0.8253 |
| No log | 4.1880 | 490 | 0.6097 | 0.6287 | 0.6097 | 0.7808 |
| No log | 4.2051 | 492 | 0.5998 | 0.6287 | 0.5998 | 0.7745 |
| No log | 4.2222 | 494 | 0.6013 | 0.6287 | 0.6013 | 0.7754 |
| No log | 4.2393 | 496 | 0.6282 | 0.5929 | 0.6282 | 0.7926 |
| No log | 4.2564 | 498 | 0.6454 | 0.5929 | 0.6454 | 0.8034 |
| 0.4453 | 4.2735 | 500 | 0.6797 | 0.6445 | 0.6797 | 0.8244 |
| 0.4453 | 4.2906 | 502 | 0.7200 | 0.6445 | 0.7200 | 0.8485 |
| 0.4453 | 4.3077 | 504 | 0.7426 | 0.6419 | 0.7426 | 0.8618 |
| 0.4453 | 4.3248 | 506 | 0.7072 | 0.6419 | 0.7072 | 0.8409 |
| 0.4453 | 4.3419 | 508 | 0.7015 | 0.6419 | 0.7015 | 0.8376 |
| 0.4453 | 4.3590 | 510 | 0.6983 | 0.5856 | 0.6983 | 0.8356 |
| 0.4453 | 4.3761 | 512 | 0.7058 | 0.6419 | 0.7058 | 0.8401 |
| 0.4453 | 4.3932 | 514 | 0.7565 | 0.6419 | 0.7565 | 0.8698 |
| 0.4453 | 4.4103 | 516 | 0.8528 | 0.5952 | 0.8528 | 0.9235 |
| 0.4453 | 4.4274 | 518 | 0.8816 | 0.5205 | 0.8816 | 0.9390 |
| 0.4453 | 4.4444 | 520 | 0.8128 | 0.6419 | 0.8128 | 0.9016 |
| 0.4453 | 4.4615 | 522 | 0.7398 | 0.6419 | 0.7398 | 0.8601 |
| 0.4453 | 4.4786 | 524 | 0.6713 | 0.5856 | 0.6713 | 0.8193 |
| 0.4453 | 4.4957 | 526 | 0.6148 | 0.5813 | 0.6148 | 0.7841 |
| 0.4453 | 4.5128 | 528 | 0.6046 | 0.5813 | 0.6046 | 0.7776 |
| 0.4453 | 4.5299 | 530 | 0.6157 | 0.5813 | 0.6157 | 0.7847 |
| 0.4453 | 4.5470 | 532 | 0.6393 | 0.6462 | 0.6393 | 0.7995 |
| 0.4453 | 4.5641 | 534 | 0.6554 | 0.6462 | 0.6554 | 0.8095 |
| 0.4453 | 4.5812 | 536 | 0.6310 | 0.5949 | 0.6310 | 0.7943 |
| 0.4453 | 4.5983 | 538 | 0.6130 | 0.5949 | 0.6130 | 0.7829 |
| 0.4453 | 4.6154 | 540 | 0.6115 | 0.5949 | 0.6115 | 0.7820 |
| 0.4453 | 4.6325 | 542 | 0.6123 | 0.5949 | 0.6123 | 0.7825 |
| 0.4453 | 4.6496 | 544 | 0.6178 | 0.5949 | 0.6178 | 0.7860 |
| 0.4453 | 4.6667 | 546 | 0.6616 | 0.6305 | 0.6616 | 0.8134 |
| 0.4453 | 4.6838 | 548 | 0.7384 | 0.6445 | 0.7384 | 0.8593 |
| 0.4453 | 4.7009 | 550 | 0.7639 | 0.6896 | 0.7639 | 0.8740 |
| 0.4453 | 4.7179 | 552 | 0.7387 | 0.6753 | 0.7387 | 0.8595 |
| 0.4453 | 4.7350 | 554 | 0.6814 | 0.6305 | 0.6814 | 0.8254 |
| 0.4453 | 4.7521 | 556 | 0.6078 | 0.6305 | 0.6078 | 0.7796 |
| 0.4453 | 4.7692 | 558 | 0.5836 | 0.6495 | 0.5836 | 0.7639 |
| 0.4453 | 4.7863 | 560 | 0.5829 | 0.6163 | 0.5829 | 0.7635 |
| 0.4453 | 4.8034 | 562 | 0.6042 | 0.6145 | 0.6042 | 0.7773 |
| 0.4453 | 4.8205 | 564 | 0.6158 | 0.6622 | 0.6158 | 0.7847 |
| 0.4453 | 4.8376 | 566 | 0.6338 | 0.6305 | 0.6338 | 0.7961 |
| 0.4453 | 4.8547 | 568 | 0.6267 | 0.6462 | 0.6267 | 0.7916 |
| 0.4453 | 4.8718 | 570 | 0.6524 | 0.6305 | 0.6524 | 0.8077 |
| 0.4453 | 4.8889 | 572 | 0.6527 | 0.6305 | 0.6527 | 0.8079 |
| 0.4453 | 4.9060 | 574 | 0.6326 | 0.6305 | 0.6326 | 0.7953 |
| 0.4453 | 4.9231 | 576 | 0.6092 | 0.6163 | 0.6092 | 0.7805 |
| 0.4453 | 4.9402 | 578 | 0.6046 | 0.6305 | 0.6046 | 0.7775 |
| 0.4453 | 4.9573 | 580 | 0.6099 | 0.6305 | 0.6099 | 0.7809 |
| 0.4453 | 4.9744 | 582 | 0.6199 | 0.6305 | 0.6199 | 0.7874 |
| 0.4453 | 4.9915 | 584 | 0.6183 | 0.6163 | 0.6183 | 0.7863 |
| 0.4453 | 5.0085 | 586 | 0.6029 | 0.6659 | 0.6029 | 0.7765 |
| 0.4453 | 5.0256 | 588 | 0.6012 | 0.6659 | 0.6012 | 0.7754 |
| 0.4453 | 5.0427 | 590 | 0.6012 | 0.6339 | 0.6012 | 0.7754 |
| 0.4453 | 5.0598 | 592 | 0.6145 | 0.6145 | 0.6145 | 0.7839 |
| 0.4453 | 5.0769 | 594 | 0.6032 | 0.6643 | 0.6032 | 0.7766 |
| 0.4453 | 5.0940 | 596 | 0.6068 | 0.6814 | 0.6068 | 0.7790 |
| 0.4453 | 5.1111 | 598 | 0.6413 | 0.5792 | 0.6413 | 0.8008 |
| 0.4453 | 5.1282 | 600 | 0.6818 | 0.5710 | 0.6818 | 0.8257 |
| 0.4453 | 5.1453 | 602 | 0.6757 | 0.5710 | 0.6757 | 0.8220 |
| 0.4453 | 5.1624 | 604 | 0.6428 | 0.5879 | 0.6428 | 0.8017 |
| 0.4453 | 5.1795 | 606 | 0.6162 | 0.6448 | 0.6162 | 0.7850 |
| 0.4453 | 5.1966 | 608 | 0.6133 | 0.6448 | 0.6133 | 0.7831 |
| 0.4453 | 5.2137 | 610 | 0.6061 | 0.6448 | 0.6061 | 0.7786 |
| 0.4453 | 5.2308 | 612 | 0.6078 | 0.6056 | 0.6078 | 0.7796 |
| 0.4453 | 5.2479 | 614 | 0.6246 | 0.6056 | 0.6246 | 0.7903 |
| 0.4453 | 5.2650 | 616 | 0.6574 | 0.6034 | 0.6574 | 0.8108 |
| 0.4453 | 5.2821 | 618 | 0.7324 | 0.6419 | 0.7324 | 0.8558 |
| 0.4453 | 5.2991 | 620 | 0.8078 | 0.6119 | 0.8078 | 0.8988 |
| 0.4453 | 5.3162 | 622 | 0.8134 | 0.6119 | 0.8134 | 0.9019 |
| 0.4453 | 5.3333 | 624 | 0.7495 | 0.6419 | 0.7495 | 0.8657 |
| 0.4453 | 5.3504 | 626 | 0.6501 | 0.5856 | 0.6501 | 0.8063 |
| 0.4453 | 5.3675 | 628 | 0.6080 | 0.5949 | 0.6080 | 0.7797 |
| 0.4453 | 5.3846 | 630 | 0.5940 | 0.6305 | 0.5940 | 0.7707 |
| 0.4453 | 5.4017 | 632 | 0.5999 | 0.6145 | 0.5999 | 0.7745 |
| 0.4453 | 5.4188 | 634 | 0.6328 | 0.6287 | 0.6328 | 0.7955 |
| 0.4453 | 5.4359 | 636 | 0.6509 | 0.6287 | 0.6509 | 0.8068 |
| 0.4453 | 5.4530 | 638 | 0.6432 | 0.6287 | 0.6432 | 0.8020 |
| 0.4453 | 5.4701 | 640 | 0.6153 | 0.6287 | 0.6153 | 0.7844 |
| 0.4453 | 5.4872 | 642 | 0.5906 | 0.6287 | 0.5906 | 0.7685 |
| 0.4453 | 5.5043 | 644 | 0.5862 | 0.6287 | 0.5862 | 0.7656 |
| 0.4453 | 5.5214 | 646 | 0.6148 | 0.6287 | 0.6148 | 0.7841 |
| 0.4453 | 5.5385 | 648 | 0.6255 | 0.6287 | 0.6255 | 0.7909 |
| 0.4453 | 5.5556 | 650 | 0.6264 | 0.6287 | 0.6264 | 0.7915 |
| 0.4453 | 5.5726 | 652 | 0.5927 | 0.6287 | 0.5927 | 0.7699 |
| 0.4453 | 5.5897 | 654 | 0.5770 | 0.6287 | 0.5770 | 0.7596 |
| 0.4453 | 5.6068 | 656 | 0.5607 | 0.6287 | 0.5607 | 0.7488 |
| 0.4453 | 5.6239 | 658 | 0.5492 | 0.6643 | 0.5492 | 0.7411 |
| 0.4453 | 5.6410 | 660 | 0.5428 | 0.6643 | 0.5428 | 0.7368 |
| 0.4453 | 5.6581 | 662 | 0.5553 | 0.6479 | 0.5553 | 0.7452 |
| 0.4453 | 5.6752 | 664 | 0.5803 | 0.6287 | 0.5803 | 0.7618 |
| 0.4453 | 5.6923 | 666 | 0.5818 | 0.6287 | 0.5818 | 0.7628 |
| 0.4453 | 5.7094 | 668 | 0.5855 | 0.6767 | 0.5855 | 0.7652 |
| 0.4453 | 5.7265 | 670 | 0.6152 | 0.6767 | 0.6152 | 0.7843 |
| 0.4453 | 5.7436 | 672 | 0.6375 | 0.7186 | 0.6375 | 0.7984 |
| 0.4453 | 5.7607 | 674 | 0.6456 | 0.7186 | 0.6456 | 0.8035 |
| 0.4453 | 5.7778 | 676 | 0.6378 | 0.7186 | 0.6378 | 0.7986 |
| 0.4453 | 5.7949 | 678 | 0.5939 | 0.7186 | 0.5939 | 0.7706 |
| 0.4453 | 5.8120 | 680 | 0.5717 | 0.6622 | 0.5717 | 0.7561 |
| 0.4453 | 5.8291 | 682 | 0.5672 | 0.6622 | 0.5672 | 0.7531 |
| 0.4453 | 5.8462 | 684 | 0.5876 | 0.6767 | 0.5876 | 0.7666 |
| 0.4453 | 5.8632 | 686 | 0.6183 | 0.6767 | 0.6183 | 0.7863 |
| 0.4453 | 5.8803 | 688 | 0.6337 | 0.6767 | 0.6337 | 0.7960 |
| 0.4453 | 5.8974 | 690 | 0.6323 | 0.6767 | 0.6323 | 0.7952 |
| 0.4453 | 5.9145 | 692 | 0.6029 | 0.6287 | 0.6029 | 0.7765 |
| 0.4453 | 5.9316 | 694 | 0.5567 | 0.6287 | 0.5567 | 0.7461 |
| 0.4453 | 5.9487 | 696 | 0.5294 | 0.6456 | 0.5294 | 0.7276 |
| 0.4453 | 5.9658 | 698 | 0.5282 | 0.6632 | 0.5282 | 0.7267 |
| 0.4453 | 5.9829 | 700 | 0.5401 | 0.6456 | 0.5401 | 0.7349 |
| 0.4453 | 6.0 | 702 | 0.5535 | 0.6456 | 0.5535 | 0.7440 |
| 0.4453 | 6.0171 | 704 | 0.5738 | 0.6456 | 0.5738 | 0.7575 |
| 0.4453 | 6.0342 | 706 | 0.5954 | 0.6287 | 0.5954 | 0.7716 |
| 0.4453 | 6.0513 | 708 | 0.6450 | 0.6287 | 0.6450 | 0.8031 |
| 0.4453 | 6.0684 | 710 | 0.7339 | 0.6288 | 0.7339 | 0.8567 |
| 0.4453 | 6.0855 | 712 | 0.7898 | 0.6737 | 0.7898 | 0.8887 |
| 0.4453 | 6.1026 | 714 | 0.7855 | 0.6737 | 0.7855 | 0.8863 |
| 0.4453 | 6.1197 | 716 | 0.7338 | 0.6739 | 0.7338 | 0.8566 |
| 0.4453 | 6.1368 | 718 | 0.6803 | 0.6896 | 0.6803 | 0.8248 |
| 0.4453 | 6.1538 | 720 | 0.6509 | 0.6767 | 0.6509 | 0.8068 |
| 0.4453 | 6.1709 | 722 | 0.6179 | 0.6456 | 0.6179 | 0.7861 |
| 0.4453 | 6.1880 | 724 | 0.6190 | 0.6456 | 0.6190 | 0.7868 |
| 0.4453 | 6.2051 | 726 | 0.6488 | 0.6767 | 0.6488 | 0.8055 |
| 0.4453 | 6.2222 | 728 | 0.7031 | 0.6445 | 0.7031 | 0.8385 |
| 0.4453 | 6.2393 | 730 | 0.7514 | 0.5682 | 0.7514 | 0.8668 |
| 0.4453 | 6.2564 | 732 | 0.7722 | 0.5682 | 0.7722 | 0.8787 |
| 0.4453 | 6.2735 | 734 | 0.7386 | 0.6606 | 0.7386 | 0.8594 |
| 0.4453 | 6.2906 | 736 | 0.6685 | 0.6767 | 0.6685 | 0.8176 |
| 0.4453 | 6.3077 | 738 | 0.5912 | 0.6287 | 0.5912 | 0.7689 |
| 0.4453 | 6.3248 | 740 | 0.5513 | 0.6474 | 0.5513 | 0.7425 |
| 0.4453 | 6.3419 | 742 | 0.5453 | 0.6474 | 0.5453 | 0.7384 |
| 0.4453 | 6.3590 | 744 | 0.5566 | 0.6456 | 0.5566 | 0.7460 |
| 0.4453 | 6.3761 | 746 | 0.5691 | 0.6287 | 0.5691 | 0.7544 |
| 0.4453 | 6.3932 | 748 | 0.5719 | 0.6287 | 0.5719 | 0.7562 |
| 0.4453 | 6.4103 | 750 | 0.5678 | 0.6456 | 0.5678 | 0.7535 |
| 0.4453 | 6.4274 | 752 | 0.5716 | 0.6456 | 0.5716 | 0.7560 |
| 0.4453 | 6.4444 | 754 | 0.5577 | 0.6456 | 0.5577 | 0.7468 |
| 0.4453 | 6.4615 | 756 | 0.5519 | 0.6632 | 0.5519 | 0.7429 |
| 0.4453 | 6.4786 | 758 | 0.5416 | 0.6632 | 0.5416 | 0.7360 |
| 0.4453 | 6.4957 | 760 | 0.5287 | 0.6632 | 0.5287 | 0.7271 |
| 0.4453 | 6.5128 | 762 | 0.5144 | 0.6632 | 0.5144 | 0.7173 |
| 0.4453 | 6.5299 | 764 | 0.5098 | 0.6632 | 0.5098 | 0.7140 |
| 0.4453 | 6.5470 | 766 | 0.5152 | 0.6456 | 0.5152 | 0.7178 |
| 0.4453 | 6.5641 | 768 | 0.5212 | 0.6287 | 0.5212 | 0.7219 |
| 0.4453 | 6.5812 | 770 | 0.5127 | 0.6145 | 0.5127 | 0.7160 |
| 0.4453 | 6.5983 | 772 | 0.5102 | 0.6145 | 0.5102 | 0.7143 |
| 0.4453 | 6.6154 | 774 | 0.5162 | 0.6287 | 0.5162 | 0.7184 |
| 0.4453 | 6.6325 | 776 | 0.5262 | 0.6287 | 0.5262 | 0.7254 |
| 0.4453 | 6.6496 | 778 | 0.5244 | 0.6287 | 0.5244 | 0.7242 |
| 0.4453 | 6.6667 | 780 | 0.5195 | 0.6287 | 0.5195 | 0.7208 |
| 0.4453 | 6.6838 | 782 | 0.5138 | 0.6287 | 0.5138 | 0.7168 |
| 0.4453 | 6.7009 | 784 | 0.5019 | 0.6474 | 0.5019 | 0.7085 |
| 0.4453 | 6.7179 | 786 | 0.4975 | 0.7165 | 0.4975 | 0.7054 |
| 0.4453 | 6.7350 | 788 | 0.4997 | 0.7165 | 0.4997 | 0.7069 |
| 0.4453 | 6.7521 | 790 | 0.5023 | 0.7165 | 0.5023 | 0.7087 |
| 0.4453 | 6.7692 | 792 | 0.5121 | 0.6305 | 0.5121 | 0.7156 |
| 0.4453 | 6.7863 | 794 | 0.5254 | 0.6145 | 0.5254 | 0.7248 |
| 0.4453 | 6.8034 | 796 | 0.5486 | 0.6287 | 0.5486 | 0.7407 |
| 0.4453 | 6.8205 | 798 | 0.5607 | 0.6767 | 0.5607 | 0.7488 |
| 0.4453 | 6.8376 | 800 | 0.5474 | 0.6622 | 0.5474 | 0.7399 |
| 0.4453 | 6.8547 | 802 | 0.5419 | 0.6622 | 0.5419 | 0.7361 |
| 0.4453 | 6.8718 | 804 | 0.5254 | 0.6782 | 0.5254 | 0.7249 |
| 0.4453 | 6.8889 | 806 | 0.5186 | 0.6637 | 0.5186 | 0.7202 |
| 0.4453 | 6.9060 | 808 | 0.5064 | 0.6637 | 0.5064 | 0.7116 |
| 0.4453 | 6.9231 | 810 | 0.5001 | 0.6637 | 0.5001 | 0.7072 |
| 0.4453 | 6.9402 | 812 | 0.5012 | 0.6637 | 0.5012 | 0.7080 |
| 0.4453 | 6.9573 | 814 | 0.5135 | 0.6484 | 0.5135 | 0.7166 |
| 0.4453 | 6.9744 | 816 | 0.5255 | 0.6622 | 0.5255 | 0.7249 |
| 0.4453 | 6.9915 | 818 | 0.5394 | 0.6622 | 0.5394 | 0.7344 |
| 0.4453 | 7.0085 | 820 | 0.5334 | 0.6622 | 0.5334 | 0.7303 |
| 0.4453 | 7.0256 | 822 | 0.5171 | 0.6622 | 0.5171 | 0.7191 |
| 0.4453 | 7.0427 | 824 | 0.5098 | 0.6163 | 0.5098 | 0.7140 |
| 0.4453 | 7.0598 | 826 | 0.5055 | 0.6163 | 0.5055 | 0.7110 |
| 0.4453 | 7.0769 | 828 | 0.5080 | 0.6163 | 0.5080 | 0.7128 |
| 0.4453 | 7.0940 | 830 | 0.5008 | 0.6992 | 0.5008 | 0.7077 |
| 0.4453 | 7.1111 | 832 | 0.5013 | 0.6992 | 0.5013 | 0.7080 |
| 0.4453 | 7.1282 | 834 | 0.5158 | 0.6992 | 0.5158 | 0.7182 |
| 0.4453 | 7.1453 | 836 | 0.5365 | 0.6145 | 0.5365 | 0.7325 |
| 0.4453 | 7.1624 | 838 | 0.5465 | 0.6287 | 0.5465 | 0.7393 |
| 0.4453 | 7.1795 | 840 | 0.5420 | 0.6145 | 0.5420 | 0.7362 |
| 0.4453 | 7.1966 | 842 | 0.5377 | 0.6287 | 0.5377 | 0.7333 |
| 0.4453 | 7.2137 | 844 | 0.5471 | 0.6287 | 0.5471 | 0.7397 |
| 0.4453 | 7.2308 | 846 | 0.5480 | 0.6287 | 0.5480 | 0.7403 |
| 0.4453 | 7.2479 | 848 | 0.5369 | 0.6287 | 0.5369 | 0.7327 |
| 0.4453 | 7.2650 | 850 | 0.5199 | 0.6145 | 0.5199 | 0.7210 |
| 0.4453 | 7.2821 | 852 | 0.5141 | 0.6992 | 0.5141 | 0.7170 |
| 0.4453 | 7.2991 | 854 | 0.5186 | 0.6324 | 0.5186 | 0.7201 |
| 0.4453 | 7.3162 | 856 | 0.5283 | 0.6324 | 0.5283 | 0.7268 |
| 0.4453 | 7.3333 | 858 | 0.5431 | 0.6163 | 0.5431 | 0.7369 |
| 0.4453 | 7.3504 | 860 | 0.5709 | 0.6622 | 0.5709 | 0.7556 |
| 0.4453 | 7.3675 | 862 | 0.5987 | 0.6767 | 0.5987 | 0.7738 |
| 0.4453 | 7.3846 | 864 | 0.6142 | 0.6767 | 0.6142 | 0.7837 |
| 0.4453 | 7.4017 | 866 | 0.6112 | 0.6767 | 0.6112 | 0.7818 |
| 0.4453 | 7.4188 | 868 | 0.5842 | 0.6767 | 0.5842 | 0.7644 |
| 0.4453 | 7.4359 | 870 | 0.5700 | 0.6622 | 0.5700 | 0.7550 |
| 0.4453 | 7.4530 | 872 | 0.5618 | 0.6622 | 0.5618 | 0.7495 |
| 0.4453 | 7.4701 | 874 | 0.5750 | 0.6622 | 0.5750 | 0.7583 |
| 0.4453 | 7.4872 | 876 | 0.5799 | 0.6767 | 0.5799 | 0.7615 |
| 0.4453 | 7.5043 | 878 | 0.5889 | 0.6767 | 0.5889 | 0.7674 |
| 0.4453 | 7.5214 | 880 | 0.5938 | 0.6767 | 0.5938 | 0.7706 |
| 0.4453 | 7.5385 | 882 | 0.5786 | 0.6767 | 0.5786 | 0.7606 |
| 0.4453 | 7.5556 | 884 | 0.5717 | 0.6622 | 0.5717 | 0.7561 |
| 0.4453 | 7.5726 | 886 | 0.5725 | 0.6622 | 0.5725 | 0.7566 |
| 0.4453 | 7.5897 | 888 | 0.5814 | 0.6767 | 0.5814 | 0.7625 |
| 0.4453 | 7.6068 | 890 | 0.5984 | 0.6767 | 0.5984 | 0.7736 |
| 0.4453 | 7.6239 | 892 | 0.5987 | 0.6767 | 0.5987 | 0.7737 |
| 0.4453 | 7.6410 | 894 | 0.5973 | 0.6767 | 0.5973 | 0.7728 |
| 0.4453 | 7.6581 | 896 | 0.6064 | 0.6767 | 0.6064 | 0.7787 |
| 0.4453 | 7.6752 | 898 | 0.6085 | 0.6767 | 0.6085 | 0.7801 |
| 0.4453 | 7.6923 | 900 | 0.6086 | 0.6934 | 0.6086 | 0.7801 |
| 0.4453 | 7.7094 | 902 | 0.6095 | 0.6934 | 0.6095 | 0.7807 |
| 0.4453 | 7.7265 | 904 | 0.6081 | 0.6934 | 0.6081 | 0.7798 |
| 0.4453 | 7.7436 | 906 | 0.6083 | 0.6934 | 0.6083 | 0.7800 |
| 0.4453 | 7.7607 | 908 | 0.6085 | 0.6934 | 0.6085 | 0.7801 |
| 0.4453 | 7.7778 | 910 | 0.6087 | 0.6767 | 0.6087 | 0.7802 |
| 0.4453 | 7.7949 | 912 | 0.6020 | 0.6767 | 0.6020 | 0.7759 |
| 0.4453 | 7.8120 | 914 | 0.5943 | 0.6767 | 0.5943 | 0.7709 |
| 0.4453 | 7.8291 | 916 | 0.5747 | 0.6767 | 0.5747 | 0.7581 |
| 0.4453 | 7.8462 | 918 | 0.5568 | 0.6622 | 0.5568 | 0.7462 |
| 0.4453 | 7.8632 | 920 | 0.5509 | 0.6622 | 0.5509 | 0.7422 |
| 0.4453 | 7.8803 | 922 | 0.5414 | 0.6145 | 0.5414 | 0.7358 |
| 0.4453 | 7.8974 | 924 | 0.5315 | 0.6305 | 0.5315 | 0.7290 |
| 0.4453 | 7.9145 | 926 | 0.5255 | 0.6305 | 0.5255 | 0.7249 |
| 0.4453 | 7.9316 | 928 | 0.5311 | 0.6305 | 0.5311 | 0.7288 |
| 0.4453 | 7.9487 | 930 | 0.5329 | 0.6305 | 0.5329 | 0.7300 |
| 0.4453 | 7.9658 | 932 | 0.5401 | 0.6782 | 0.5401 | 0.7349 |
| 0.4453 | 7.9829 | 934 | 0.5568 | 0.6767 | 0.5568 | 0.7462 |
| 0.4453 | 8.0 | 936 | 0.5858 | 0.6767 | 0.5858 | 0.7654 |
| 0.4453 | 8.0171 | 938 | 0.5986 | 0.6767 | 0.5986 | 0.7737 |
| 0.4453 | 8.0342 | 940 | 0.6130 | 0.6767 | 0.6130 | 0.7830 |
| 0.4453 | 8.0513 | 942 | 0.6113 | 0.6934 | 0.6113 | 0.7818 |
| 0.4453 | 8.0684 | 944 | 0.6167 | 0.6934 | 0.6167 | 0.7853 |
| 0.4453 | 8.0855 | 946 | 0.6241 | 0.6934 | 0.6241 | 0.7900 |
| 0.4453 | 8.1026 | 948 | 0.6287 | 0.6934 | 0.6287 | 0.7929 |
| 0.4453 | 8.1197 | 950 | 0.6395 | 0.6610 | 0.6395 | 0.7997 |
| 0.4453 | 8.1368 | 952 | 0.6537 | 0.6610 | 0.6537 | 0.8085 |
| 0.4453 | 8.1538 | 954 | 0.6651 | 0.6610 | 0.6651 | 0.8155 |
| 0.4453 | 8.1709 | 956 | 0.6654 | 0.6610 | 0.6654 | 0.8157 |
| 0.4453 | 8.1880 | 958 | 0.6692 | 0.6610 | 0.6692 | 0.8180 |
| 0.4453 | 8.2051 | 960 | 0.6686 | 0.6610 | 0.6686 | 0.8177 |
| 0.4453 | 8.2222 | 962 | 0.6588 | 0.6610 | 0.6588 | 0.8116 |
| 0.4453 | 8.2393 | 964 | 0.6348 | 0.6934 | 0.6348 | 0.7967 |
| 0.4453 | 8.2564 | 966 | 0.6101 | 0.6934 | 0.6101 | 0.7811 |
| 0.4453 | 8.2735 | 968 | 0.5951 | 0.6934 | 0.5951 | 0.7714 |
| 0.4453 | 8.2906 | 970 | 0.5861 | 0.6934 | 0.5861 | 0.7656 |
| 0.4453 | 8.3077 | 972 | 0.5742 | 0.6934 | 0.5742 | 0.7578 |
| 0.4453 | 8.3248 | 974 | 0.5582 | 0.6305 | 0.5582 | 0.7471 |
| 0.4453 | 8.3419 | 976 | 0.5451 | 0.6305 | 0.5451 | 0.7383 |
| 0.4453 | 8.3590 | 978 | 0.5406 | 0.6305 | 0.5406 | 0.7352 |
| 0.4453 | 8.3761 | 980 | 0.5460 | 0.6305 | 0.5460 | 0.7389 |
| 0.4453 | 8.3932 | 982 | 0.5550 | 0.6767 | 0.5550 | 0.7450 |
| 0.4453 | 8.4103 | 984 | 0.5730 | 0.6767 | 0.5730 | 0.7570 |
| 0.4453 | 8.4274 | 986 | 0.5960 | 0.6767 | 0.5960 | 0.7720 |
| 0.4453 | 8.4444 | 988 | 0.6000 | 0.6767 | 0.6000 | 0.7746 |
| 0.4453 | 8.4615 | 990 | 0.5935 | 0.6767 | 0.5935 | 0.7704 |
| 0.4453 | 8.4786 | 992 | 0.5767 | 0.6622 | 0.5767 | 0.7594 |
| 0.4453 | 8.4957 | 994 | 0.5698 | 0.6622 | 0.5698 | 0.7549 |
| 0.4453 | 8.5128 | 996 | 0.5716 | 0.6622 | 0.5716 | 0.7561 |
| 0.4453 | 8.5299 | 998 | 0.5736 | 0.6622 | 0.5736 | 0.7573 |
| 0.0772 | 8.5470 | 1000 | 0.5776 | 0.6622 | 0.5776 | 0.7600 |
| 0.0772 | 8.5641 | 1002 | 0.5802 | 0.6622 | 0.5802 | 0.7617 |
| 0.0772 | 8.5812 | 1004 | 0.5872 | 0.6622 | 0.5872 | 0.7663 |
| 0.0772 | 8.5983 | 1006 | 0.5814 | 0.6622 | 0.5814 | 0.7625 |
| 0.0772 | 8.6154 | 1008 | 0.5744 | 0.6622 | 0.5744 | 0.7579 |
| 0.0772 | 8.6325 | 1010 | 0.5710 | 0.6622 | 0.5710 | 0.7557 |
| 0.0772 | 8.6496 | 1012 | 0.5645 | 0.6622 | 0.5645 | 0.7513 |
| 0.0772 | 8.6667 | 1014 | 0.5543 | 0.6782 | 0.5543 | 0.7445 |
| 0.0772 | 8.6838 | 1016 | 0.5480 | 0.6782 | 0.5480 | 0.7402 |
| 0.0772 | 8.7009 | 1018 | 0.5438 | 0.6782 | 0.5438 | 0.7374 |
| 0.0772 | 8.7179 | 1020 | 0.5450 | 0.6782 | 0.5450 | 0.7382 |
| 0.0772 | 8.7350 | 1022 | 0.5528 | 0.6782 | 0.5528 | 0.7435 |
| 0.0772 | 8.7521 | 1024 | 0.5656 | 0.6782 | 0.5656 | 0.7521 |
| 0.0772 | 8.7692 | 1026 | 0.5845 | 0.6934 | 0.5845 | 0.7645 |
| 0.0772 | 8.7863 | 1028 | 0.6012 | 0.6767 | 0.6012 | 0.7753 |
| 0.0772 | 8.8034 | 1030 | 0.6185 | 0.6767 | 0.6185 | 0.7865 |
| 0.0772 | 8.8205 | 1032 | 0.6223 | 0.7186 | 0.6223 | 0.7888 |
| 0.0772 | 8.8376 | 1034 | 0.6295 | 0.7186 | 0.6295 | 0.7934 |
| 0.0772 | 8.8547 | 1036 | 0.6419 | 0.7186 | 0.6419 | 0.8012 |
| 0.0772 | 8.8718 | 1038 | 0.6389 | 0.7186 | 0.6389 | 0.7993 |
| 0.0772 | 8.8889 | 1040 | 0.6326 | 0.7186 | 0.6326 | 0.7954 |
| 0.0772 | 8.9060 | 1042 | 0.6224 | 0.7186 | 0.6224 | 0.7889 |
| 0.0772 | 8.9231 | 1044 | 0.6100 | 0.7351 | 0.6100 | 0.7810 |
| 0.0772 | 8.9402 | 1046 | 0.5999 | 0.6782 | 0.5999 | 0.7746 |
| 0.0772 | 8.9573 | 1048 | 0.5899 | 0.6782 | 0.5899 | 0.7680 |
| 0.0772 | 8.9744 | 1050 | 0.5862 | 0.6782 | 0.5862 | 0.7657 |
| 0.0772 | 8.9915 | 1052 | 0.5844 | 0.6782 | 0.5844 | 0.7645 |
| 0.0772 | 9.0085 | 1054 | 0.5870 | 0.6782 | 0.5870 | 0.7662 |
| 0.0772 | 9.0256 | 1056 | 0.5892 | 0.6782 | 0.5892 | 0.7676 |
| 0.0772 | 9.0427 | 1058 | 0.5897 | 0.6782 | 0.5897 | 0.7679 |
| 0.0772 | 9.0598 | 1060 | 0.5897 | 0.6622 | 0.5897 | 0.7679 |
| 0.0772 | 9.0769 | 1062 | 0.5870 | 0.6622 | 0.5870 | 0.7661 |
| 0.0772 | 9.0940 | 1064 | 0.5829 | 0.6622 | 0.5829 | 0.7635 |
| 0.0772 | 9.1111 | 1066 | 0.5790 | 0.6622 | 0.5790 | 0.7609 |
| 0.0772 | 9.1282 | 1068 | 0.5796 | 0.6622 | 0.5796 | 0.7613 |
| 0.0772 | 9.1453 | 1070 | 0.5858 | 0.6622 | 0.5858 | 0.7654 |
| 0.0772 | 9.1624 | 1072 | 0.5928 | 0.6622 | 0.5928 | 0.7699 |
| 0.0772 | 9.1795 | 1074 | 0.6022 | 0.6767 | 0.6022 | 0.7760 |
| 0.0772 | 9.1966 | 1076 | 0.6059 | 0.6767 | 0.6059 | 0.7784 |
| 0.0772 | 9.2137 | 1078 | 0.6116 | 0.6767 | 0.6116 | 0.7820 |
| 0.0772 | 9.2308 | 1080 | 0.6176 | 0.6767 | 0.6176 | 0.7859 |
| 0.0772 | 9.2479 | 1082 | 0.6220 | 0.6767 | 0.6220 | 0.7887 |
| 0.0772 | 9.2650 | 1084 | 0.6218 | 0.6767 | 0.6218 | 0.7886 |
| 0.0772 | 9.2821 | 1086 | 0.6234 | 0.6767 | 0.6234 | 0.7895 |
| 0.0772 | 9.2991 | 1088 | 0.6203 | 0.6767 | 0.6203 | 0.7876 |
| 0.0772 | 9.3162 | 1090 | 0.6172 | 0.6767 | 0.6172 | 0.7856 |
| 0.0772 | 9.3333 | 1092 | 0.6129 | 0.6767 | 0.6129 | 0.7829 |
| 0.0772 | 9.3504 | 1094 | 0.6060 | 0.6767 | 0.6060 | 0.7785 |
| 0.0772 | 9.3675 | 1096 | 0.5976 | 0.6767 | 0.5976 | 0.7730 |
| 0.0772 | 9.3846 | 1098 | 0.5885 | 0.6767 | 0.5885 | 0.7672 |
| 0.0772 | 9.4017 | 1100 | 0.5829 | 0.6767 | 0.5829 | 0.7635 |
| 0.0772 | 9.4188 | 1102 | 0.5814 | 0.6622 | 0.5814 | 0.7625 |
| 0.0772 | 9.4359 | 1104 | 0.5839 | 0.6767 | 0.5839 | 0.7641 |
| 0.0772 | 9.4530 | 1106 | 0.5872 | 0.6767 | 0.5872 | 0.7663 |
| 0.0772 | 9.4701 | 1108 | 0.5906 | 0.6767 | 0.5906 | 0.7685 |
| 0.0772 | 9.4872 | 1110 | 0.5946 | 0.6767 | 0.5946 | 0.7711 |
| 0.0772 | 9.5043 | 1112 | 0.5963 | 0.6767 | 0.5963 | 0.7722 |
| 0.0772 | 9.5214 | 1114 | 0.5995 | 0.6767 | 0.5995 | 0.7743 |
| 0.0772 | 9.5385 | 1116 | 0.6012 | 0.6767 | 0.6012 | 0.7754 |
| 0.0772 | 9.5556 | 1118 | 0.6054 | 0.6767 | 0.6054 | 0.7780 |
| 0.0772 | 9.5726 | 1120 | 0.6082 | 0.6767 | 0.6082 | 0.7799 |
| 0.0772 | 9.5897 | 1122 | 0.6087 | 0.6767 | 0.6087 | 0.7802 |
| 0.0772 | 9.6068 | 1124 | 0.6074 | 0.6767 | 0.6074 | 0.7793 |
| 0.0772 | 9.6239 | 1126 | 0.6058 | 0.6767 | 0.6058 | 0.7783 |
| 0.0772 | 9.6410 | 1128 | 0.6037 | 0.6767 | 0.6037 | 0.7770 |
| 0.0772 | 9.6581 | 1130 | 0.5996 | 0.6767 | 0.5996 | 0.7743 |
| 0.0772 | 9.6752 | 1132 | 0.5953 | 0.6767 | 0.5953 | 0.7716 |
| 0.0772 | 9.6923 | 1134 | 0.5931 | 0.6767 | 0.5931 | 0.7701 |
| 0.0772 | 9.7094 | 1136 | 0.5924 | 0.6767 | 0.5924 | 0.7697 |
| 0.0772 | 9.7265 | 1138 | 0.5917 | 0.6767 | 0.5917 | 0.7692 |
| 0.0772 | 9.7436 | 1140 | 0.5919 | 0.6767 | 0.5919 | 0.7693 |
| 0.0772 | 9.7607 | 1142 | 0.5927 | 0.6767 | 0.5927 | 0.7699 |
| 0.0772 | 9.7778 | 1144 | 0.5948 | 0.6767 | 0.5948 | 0.7712 |
| 0.0772 | 9.7949 | 1146 | 0.5968 | 0.6767 | 0.5968 | 0.7726 |
| 0.0772 | 9.8120 | 1148 | 0.5988 | 0.6767 | 0.5988 | 0.7738 |
| 0.0772 | 9.8291 | 1150 | 0.6001 | 0.6767 | 0.6001 | 0.7747 |
| 0.0772 | 9.8462 | 1152 | 0.6008 | 0.6767 | 0.6008 | 0.7751 |
| 0.0772 | 9.8632 | 1154 | 0.6009 | 0.6767 | 0.6009 | 0.7752 |
| 0.0772 | 9.8803 | 1156 | 0.6014 | 0.6767 | 0.6014 | 0.7755 |
| 0.0772 | 9.8974 | 1158 | 0.6021 | 0.6767 | 0.6021 | 0.7759 |
| 0.0772 | 9.9145 | 1160 | 0.6023 | 0.6767 | 0.6023 | 0.7761 |
| 0.0772 | 9.9316 | 1162 | 0.6029 | 0.6767 | 0.6029 | 0.7765 |
| 0.0772 | 9.9487 | 1164 | 0.6032 | 0.6767 | 0.6032 | 0.7766 |
| 0.0772 | 9.9658 | 1166 | 0.6032 | 0.6767 | 0.6032 | 0.7767 |
| 0.0772 | 9.9829 | 1168 | 0.6033 | 0.6767 | 0.6033 | 0.7767 |
| 0.0772 | 10.0 | 1170 | 0.6034 | 0.6767 | 0.6034 | 0.7768 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Triangle104/Qwen2.5-14B-Instruct-abliterated-v2-Q6_K-GGUF | Triangle104 | 2024-11-25T16:30:26Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2",
"base_model:quantized:huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-18T14:24:54Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-14B-Instruct-abliterated-v2-Q6_K-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2`](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2) for more details on the model.
---
Model details:
-
This is an uncensored version of Qwen2.5-14B-Instruct created with abliteration (see this article to learn more about it).
Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
Important Note: This version is an improvement over the previous one, Qwen2.5-14B-Instruct-abliterated.
Usage
You can use this model in your applications by loading it with Hugging Face's transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Initialize conversation context
initial_messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy()  # Copy the initial conversation context

# Enter conversation loop
while True:
    # Get user input
    user_input = input("User: ").strip()  # Strip leading and trailing spaces

    # If the user types '/exit', end the conversation
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break

    # If the user types '/clean', reset the conversation context
    if user_input.lower() == "/clean":
        messages = initial_messages.copy()  # Reset conversation context
        print("Chat history cleared. Starting a new conversation.")
        continue

    # If input is empty, prompt the user and continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    # Add user input to the conversation
    messages.append({"role": "user", "content": user_input})

    # Build the chat template
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )

    # Tokenize input and prepare it for the model
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # Generate a response from the model
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=8192
    )

    # Extract model output, removing special tokens
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # Add the model's response to the conversation
    messages.append({"role": "assistant", "content": response})

    # Print the model's response
    print(f"Qwen: {response}")
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-14B-Instruct-abliterated-v2-Q6_K-GGUF --hf-file qwen2.5-14b-instruct-abliterated-v2-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-14B-Instruct-abliterated-v2-Q6_K-GGUF --hf-file qwen2.5-14b-instruct-abliterated-v2-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-14B-Instruct-abliterated-v2-Q6_K-GGUF --hf-file qwen2.5-14b-instruct-abliterated-v2-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-14B-Instruct-abliterated-v2-Q6_K-GGUF --hf-file qwen2.5-14b-instruct-abliterated-v2-q6_k.gguf -c 2048
```
|
MTSAIR/Cotype-Nano-4bit | MTSAIR | 2024-11-25T16:26:44Z | 138 | 11 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"ru",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-11-22T15:47:38Z | ---
library_name: transformers
language:
- ru
- en
pipeline_tag: text-generation
license: other
license_name: apache-2.0
license_link: https://huggingface.co/MTSAIR/Cotype-Nano-4bit/blob/main/Apache%20License%20MTS%20AI.docx
---
# Cotype-Nano-4bit 🤗
Cotype-Nano-4bit is an LLM that is lighter and runs faster than Cotype-Nano, while maintaining its language capabilities.
### Inference with vLLM
```
python3 -m vllm.entrypoints.openai.api_server --model MTSAIR/Cotype-Nano-4bit --port 8000
```
### Recommended generation parameters and system prompt
```python
import openai

openai.api_key = 'xxx'
endpoint = 'http://localhost:8000/v1'
model = 'MTSAIR/Cotype-Nano-4bit'
openai.api_base = endpoint

response = openai.ChatCompletion.create(
    model=model,
    temperature=0.0,  # 0.2 is also acceptable
    frequency_penalty=0.0,
    max_tokens=2048,
    top_p=0.1,  # 0.9 is also acceptable
    messages=[
        {"role": "system", "content": "You are an AI assistant. You have been given a task: you must generate a detailed and comprehensive answer."},
        {"role": "user", "content": "Tell me about AI, briefly"}
    ]
)

answer = response["choices"][0]["message"]["content"]
print(answer)
```
### Inference with Huggingface
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="MTSAIR/Cotype-Nano-4bit", device="cuda")

messages = [
    {"role": "system", "content": "You are an AI assistant. You have been given a task: you must generate a detailed and comprehensive answer."},
    {"role": "user", "content": "Tell me about AI"},
]

res = pipe(messages, max_length=1024)
# The last message in the returned conversation is the assistant's reply
print(res[0]['generated_text'][-1])
```
## Example request
**Request:**
#### Cotype-Nano-4bit response:
> **Advertising copy for an eco-friendly clothing store:**
>
> ---
>
> **Care for the Earth, dress smart!**
>
> We invite you into the world of eco-friendly fashion! We offer you not only stylish, high-quality pieces, but also the chance to choose sustainable development and the preservation of nature.
>
> **Eco-friendly clothing is your choice!**
>
> 1. **Harmony with nature.** We use only environmentally friendly materials that do no harm to the environment.
> 2. **Minimalist style.** Our clothing is not just a set of wardrobe items; it is a style that highlights your individuality.
> 3. **Long service life.** We create pieces that will serve you for a long time, helping you avoid unnecessary consumption and waste.
> 4. **Support for local producers.** We choose local artisans and manufacturers, which helps create jobs and supports the local economy.
> 5. **Putting garments back into circulation.** We offer a clothing return and recycling program so that you can contribute to protecting the environment.
>
> **Your choice is your responsibility.**
>
> Choose eco-friendly clothing, and remember that your choice affects the future of our planet. Together, let us create harmony between people, their needs, and nature.
>
> **Visit our store and become part of eco-friendly fashion!**
>
> ---
>
> We are confident that you will find in our assortment not only stylish pieces, but also the assurance that choosing sustainability and ecology is never a wasted effort. Come and see how fashion can be combined with responsibility!
### Results on ru-llm-arena: 22.5 (local measurement)
| Model | Score | 95% CI | Avg. #Tokens |
|---------------------------------------------|-------|-------------------------|---------------|
| **Cotype-Nano-4bit** | **22.5** | **+2.1 / -1.4** | **582** |
| storm-7b | 20.62 | +1.4 / -1.6 | 419.32 |
| neural-chat-7b-v3-3 | 19.04 | +1.8 / -1.5 | 927.21 |
| Vikhrmodels-Vikhr-Llama-3.2-1B-instruct | 19.04 | +1.2 / -1.5 | 958.63 |
| gigachat_lite | 17.2 | +1.5 / -1.5 | 276.81 |
| Vikhrmodels-Vikhr-Qwen-2.5-0.5b-Instruct | 16.5 | +1.5 / -1.7 | 583.5 |
| Qwen-Qwen2.5-1.5B-Instruct | 16.46 | +1.3 / -1.3 | 483.67 |
| Vikhrmodels-vikhr-qwen-1.5b-it | 13.19 | +1.3 / -1.1 | 2495.38 |
| meta-llama-Llama-3.2-1B-Instruct | 4.04 | +0.6 / -0.8 | 1240.53 |
| Qwen-Qwen2.5-0.5B-Instruct | 4.02 | +0.7 / -0.8 | 829.87 | |
mradermacher/Llama-3.2-3B-COT-i1-GGUF | mradermacher | 2024-11-25T16:26:38Z | 366 | 0 | transformers | [
"transformers",
"gguf",
"llama-3.2",
"thought-chain",
"instruction-finetuning",
"en",
"base_model:ericflo/Llama-3.2-3B-COT",
"base_model:quantized:ericflo/Llama-3.2-3B-COT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-25T15:47:21Z | ---
base_model: ericflo/Llama-3.2-3B-COT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- llama-3.2
- thought-chain
- instruction-finetuning
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ericflo/Llama-3.2-3B-COT
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3.2-3B-COT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
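
As a quick start (a sketch that assumes a llama.cpp build with `--hf-repo` support), any file from the table below can be downloaded and run directly, for example:

```bash
llama-cli --hf-repo mradermacher/Llama-3.2-3B-COT-i1-GGUF \
  --hf-file Llama-3.2-3B-COT.i1-Q4_K_M.gguf \
  -p "Explain chain-of-thought prompting in one paragraph."
```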
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-IQ2_M.gguf) | i1-IQ2_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 2.0 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 2.0 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 2.0 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-Q4_0.gguf) | i1-Q4_0 | 2.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF/resolve/main/Llama-3.2-3B-COT.i1-Q6_K.gguf) | i1-Q6_K | 2.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jw-hf-test/jw-3B-002 | jw-hf-test | 2024-11-25T16:26:07Z | 198 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T16:24:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
briannlongzhao/chinese_painting_textual_inversion | briannlongzhao | 2024-11-25T16:26:05Z | 3 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-11-07T00:50:56Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - briannlongzhao/chinese_painting_textual_inversion
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find some example images in the following.
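
As a usage sketch (not from the original card), the learned embedding can be attached to the base pipeline with diffusers' `load_textual_inversion`; the placeholder token used in the prompt below is an assumption, so check this repo's embedding file or training config for the actual token.

```python
# Sketch: load the SD 2.1 base pipeline and attach the learned embedding from this repo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("briannlongzhao/chinese_painting_textual_inversion")

# "<chinese_painting>" is a hypothetical placeholder token; replace it with the token
# actually stored in this repo's learned embedding.
image = pipe("a mountain landscape in the style of <chinese_painting>").images[0]
image.save("example.png")
```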
|
mradermacher/Llama-3.2-3B-COT-GGUF | mradermacher | 2024-11-25T16:23:50Z | 44 | 0 | transformers | [
"transformers",
"gguf",
"llama-3.2",
"thought-chain",
"instruction-finetuning",
"en",
"base_model:ericflo/Llama-3.2-3B-COT",
"base_model:quantized:ericflo/Llama-3.2-3B-COT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-25T15:33:55Z | ---
base_model: ericflo/Llama-3.2-3B-COT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- llama-3.2
- thought-chain
- instruction-finetuning
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/ericflo/Llama-3.2-3B-COT
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.2-3B-COT-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
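
For example (a sketch assuming a llama.cpp build with `--hf-repo` support), one of the quants listed below can be served locally with:

```bash
llama-server --hf-repo mradermacher/Llama-3.2-3B-COT-GGUF \
  --hf-file Llama-3.2-3B-COT.Q4_K_M.gguf \
  -c 2048
```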
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-GGUF/resolve/main/Llama-3.2-3B-COT.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-GGUF/resolve/main/Llama-3.2-3B-COT.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-GGUF/resolve/main/Llama-3.2-3B-COT.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-GGUF/resolve/main/Llama-3.2-3B-COT.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-GGUF/resolve/main/Llama-3.2-3B-COT.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-GGUF/resolve/main/Llama-3.2-3B-COT.Q4_0_4_4.gguf) | Q4_0_4_4 | 2.0 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-GGUF/resolve/main/Llama-3.2-3B-COT.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-GGUF/resolve/main/Llama-3.2-3B-COT.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-GGUF/resolve/main/Llama-3.2-3B-COT.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-GGUF/resolve/main/Llama-3.2-3B-COT.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-GGUF/resolve/main/Llama-3.2-3B-COT.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-GGUF/resolve/main/Llama-3.2-3B-COT.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-COT-GGUF/resolve/main/Llama-3.2-3B-COT.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
furrutiav/roberta_mixtral_nllfg_rubric_sst2_sentence_embd_perplexity | furrutiav | 2024-11-25T16:21:08Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-11-25T16:20:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k25_task5_organization_fold0 | MayBashendy | 2024-11-25T16:19:14Z | 164 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T16:09:20Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k25_task5_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k25_task5_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5431
- Qwk: 0.7267
- Mse: 0.5431
- Rmse: 0.7369
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0185 | 2 | 1.9923 | 0.0758 | 1.9923 | 1.4115 |
| No log | 0.0370 | 4 | 0.9063 | 0.2933 | 0.9063 | 0.9520 |
| No log | 0.0556 | 6 | 0.8910 | 0.2578 | 0.8910 | 0.9439 |
| No log | 0.0741 | 8 | 0.9601 | 0.3448 | 0.9601 | 0.9799 |
| No log | 0.0926 | 10 | 0.9153 | 0.1985 | 0.9153 | 0.9567 |
| No log | 0.1111 | 12 | 1.1084 | 0.1003 | 1.1084 | 1.0528 |
| No log | 0.1296 | 14 | 0.9851 | 0.3358 | 0.9851 | 0.9925 |
| No log | 0.1481 | 16 | 0.9259 | 0.2278 | 0.9259 | 0.9622 |
| No log | 0.1667 | 18 | 1.0820 | 0.1304 | 1.0820 | 1.0402 |
| No log | 0.1852 | 20 | 1.1433 | 0.1304 | 1.1433 | 1.0692 |
| No log | 0.2037 | 22 | 1.1494 | 0.1304 | 1.1494 | 1.0721 |
| No log | 0.2222 | 24 | 1.2463 | 0.4970 | 1.2463 | 1.1164 |
| No log | 0.2407 | 26 | 1.2877 | 0.4355 | 1.2877 | 1.1348 |
| No log | 0.2593 | 28 | 1.1758 | 0.2222 | 1.1758 | 1.0844 |
| No log | 0.2778 | 30 | 1.1531 | 0.1304 | 1.1531 | 1.0738 |
| No log | 0.2963 | 32 | 1.1866 | 0.1304 | 1.1866 | 1.0893 |
| No log | 0.3148 | 34 | 1.1585 | 0.1304 | 1.1585 | 1.0763 |
| No log | 0.3333 | 36 | 1.1030 | 0.1304 | 1.1030 | 1.0503 |
| No log | 0.3519 | 38 | 1.0650 | 0.1304 | 1.0650 | 1.0320 |
| No log | 0.3704 | 40 | 1.0345 | 0.1304 | 1.0345 | 1.0171 |
| No log | 0.3889 | 42 | 1.0224 | 0.1304 | 1.0224 | 1.0111 |
| No log | 0.4074 | 44 | 1.0254 | 0.1304 | 1.0254 | 1.0126 |
| No log | 0.4259 | 46 | 1.0111 | 0.1304 | 1.0111 | 1.0055 |
| No log | 0.4444 | 48 | 0.9729 | 0.1304 | 0.9729 | 0.9864 |
| No log | 0.4630 | 50 | 0.9464 | 0.1304 | 0.9464 | 0.9728 |
| No log | 0.4815 | 52 | 0.9424 | 0.1304 | 0.9424 | 0.9708 |
| No log | 0.5 | 54 | 0.9260 | 0.1304 | 0.9260 | 0.9623 |
| No log | 0.5185 | 56 | 0.9080 | 0.1304 | 0.9080 | 0.9529 |
| No log | 0.5370 | 58 | 0.8864 | 0.1304 | 0.8864 | 0.9415 |
| No log | 0.5556 | 60 | 0.8358 | 0.3050 | 0.8358 | 0.9142 |
| No log | 0.5741 | 62 | 0.8136 | 0.3050 | 0.8136 | 0.9020 |
| No log | 0.5926 | 64 | 0.7762 | 0.3050 | 0.7762 | 0.8810 |
| No log | 0.6111 | 66 | 0.7666 | 0.2391 | 0.7666 | 0.8756 |
| No log | 0.6296 | 68 | 0.7170 | 0.2832 | 0.7170 | 0.8468 |
| No log | 0.6481 | 70 | 0.6751 | 0.2832 | 0.6751 | 0.8216 |
| No log | 0.6667 | 72 | 0.6154 | 0.3208 | 0.6154 | 0.7845 |
| No log | 0.6852 | 74 | 0.5818 | 0.4096 | 0.5818 | 0.7627 |
| No log | 0.7037 | 76 | 0.5655 | 0.4946 | 0.5655 | 0.7520 |
| No log | 0.7222 | 78 | 0.5537 | 0.4096 | 0.5537 | 0.7441 |
| No log | 0.7407 | 80 | 0.5606 | 0.3657 | 0.5606 | 0.7487 |
| No log | 0.7593 | 82 | 0.6472 | 0.3972 | 0.6472 | 0.8045 |
| No log | 0.7778 | 84 | 0.7671 | 0.3841 | 0.7671 | 0.8758 |
| No log | 0.7963 | 86 | 0.7833 | 0.3841 | 0.7833 | 0.8850 |
| No log | 0.8148 | 88 | 0.7584 | 0.3841 | 0.7584 | 0.8709 |
| No log | 0.8333 | 90 | 0.7027 | 0.3548 | 0.7027 | 0.8383 |
| No log | 0.8519 | 92 | 0.7281 | 0.3548 | 0.7281 | 0.8533 |
| No log | 0.8704 | 94 | 0.7842 | 0.2905 | 0.7842 | 0.8856 |
| No log | 0.8889 | 96 | 0.7542 | 0.4231 | 0.7542 | 0.8685 |
| No log | 0.9074 | 98 | 0.8230 | 0.5729 | 0.8230 | 0.9072 |
| No log | 0.9259 | 100 | 0.9533 | 0.5139 | 0.9533 | 0.9764 |
| No log | 0.9444 | 102 | 1.0232 | 0.4582 | 1.0232 | 1.0115 |
| No log | 0.9630 | 104 | 1.0743 | 0.4130 | 1.0743 | 1.0365 |
| No log | 0.9815 | 106 | 1.0227 | 0.4655 | 1.0227 | 1.0113 |
| No log | 1.0 | 108 | 0.9380 | 0.4676 | 0.9380 | 0.9685 |
| No log | 1.0185 | 110 | 0.7411 | 0.5946 | 0.7411 | 0.8609 |
| No log | 1.0370 | 112 | 0.6964 | 0.4901 | 0.6964 | 0.8345 |
| No log | 1.0556 | 114 | 0.6990 | 0.4940 | 0.6990 | 0.8361 |
| No log | 1.0741 | 116 | 0.6899 | 0.3354 | 0.6899 | 0.8306 |
| No log | 1.0926 | 118 | 0.7119 | 0.4813 | 0.7119 | 0.8437 |
| No log | 1.1111 | 120 | 0.8203 | 0.5330 | 0.8203 | 0.9057 |
| No log | 1.1296 | 122 | 0.8125 | 0.5276 | 0.8125 | 0.9014 |
| No log | 1.1481 | 124 | 0.7418 | 0.5330 | 0.7418 | 0.8613 |
| No log | 1.1667 | 126 | 0.7611 | 0.5729 | 0.7611 | 0.8724 |
| No log | 1.1852 | 128 | 0.9341 | 0.4676 | 0.9341 | 0.9665 |
| No log | 1.2037 | 130 | 1.1338 | 0.4676 | 1.1338 | 1.0648 |
| No log | 1.2222 | 132 | 1.2227 | 0.3946 | 1.2227 | 1.1058 |
| No log | 1.2407 | 134 | 1.2400 | 0.3946 | 1.2400 | 1.1136 |
| No log | 1.2593 | 136 | 1.1491 | 0.4676 | 1.1491 | 1.0719 |
| No log | 1.2778 | 138 | 1.0369 | 0.4483 | 1.0369 | 1.0183 |
| No log | 1.2963 | 140 | 0.9086 | 0.4483 | 0.9086 | 0.9532 |
| No log | 1.3148 | 142 | 0.9423 | 0.4676 | 0.9423 | 0.9707 |
| No log | 1.3333 | 144 | 0.9357 | 0.4676 | 0.9357 | 0.9673 |
| No log | 1.3519 | 146 | 0.8002 | 0.5165 | 0.8002 | 0.8946 |
| No log | 1.3704 | 148 | 0.6537 | 0.6200 | 0.6537 | 0.8085 |
| No log | 1.3889 | 150 | 0.6345 | 0.6200 | 0.6345 | 0.7966 |
| No log | 1.4074 | 152 | 0.6985 | 0.6200 | 0.6985 | 0.8357 |
| No log | 1.4259 | 154 | 0.8518 | 0.5181 | 0.8518 | 0.9229 |
| No log | 1.4444 | 156 | 0.8783 | 0.5662 | 0.8783 | 0.9372 |
| No log | 1.4630 | 158 | 0.8082 | 0.5973 | 0.8082 | 0.8990 |
| No log | 1.4815 | 160 | 0.7793 | 0.6468 | 0.7793 | 0.8828 |
| No log | 1.5 | 162 | 0.8251 | 0.6222 | 0.8251 | 0.9084 |
| No log | 1.5185 | 164 | 0.9471 | 0.4469 | 0.9471 | 0.9732 |
| No log | 1.5370 | 166 | 1.0205 | 0.5117 | 1.0205 | 1.0102 |
| No log | 1.5556 | 168 | 0.8448 | 0.5662 | 0.8448 | 0.9191 |
| No log | 1.5741 | 170 | 0.6096 | 0.6306 | 0.6096 | 0.7808 |
| No log | 1.5926 | 172 | 0.5804 | 0.6597 | 0.5804 | 0.7619 |
| No log | 1.6111 | 174 | 0.6309 | 0.6306 | 0.6309 | 0.7943 |
| No log | 1.6296 | 176 | 0.8015 | 0.6721 | 0.8015 | 0.8953 |
| No log | 1.6481 | 178 | 0.8965 | 0.5662 | 0.8965 | 0.9468 |
| No log | 1.6667 | 180 | 0.7707 | 0.6977 | 0.7707 | 0.8779 |
| No log | 1.6852 | 182 | 0.6547 | 0.6491 | 0.6547 | 0.8091 |
| No log | 1.7037 | 184 | 0.6357 | 0.6491 | 0.6357 | 0.7973 |
| No log | 1.7222 | 186 | 0.6268 | 0.6491 | 0.6268 | 0.7917 |
| No log | 1.7407 | 188 | 0.6535 | 0.6491 | 0.6535 | 0.8084 |
| No log | 1.7593 | 190 | 0.6949 | 0.6610 | 0.6949 | 0.8336 |
| No log | 1.7778 | 192 | 0.6904 | 0.6875 | 0.6904 | 0.8309 |
| No log | 1.7963 | 194 | 0.6727 | 0.6875 | 0.6727 | 0.8202 |
| No log | 1.8148 | 196 | 0.7053 | 0.6875 | 0.7053 | 0.8398 |
| No log | 1.8333 | 198 | 0.7764 | 0.6610 | 0.7764 | 0.8811 |
| No log | 1.8519 | 200 | 0.7458 | 0.6610 | 0.7458 | 0.8636 |
| No log | 1.8704 | 202 | 0.6949 | 0.6610 | 0.6949 | 0.8336 |
| No log | 1.8889 | 204 | 0.6180 | 0.6875 | 0.6180 | 0.7861 |
| No log | 1.9074 | 206 | 0.6012 | 0.7267 | 0.6012 | 0.7754 |
| No log | 1.9259 | 208 | 0.5997 | 0.7267 | 0.5997 | 0.7744 |
| No log | 1.9444 | 210 | 0.6111 | 0.7267 | 0.6111 | 0.7817 |
| No log | 1.9630 | 212 | 0.6189 | 0.7267 | 0.6189 | 0.7867 |
| No log | 1.9815 | 214 | 0.6399 | 0.6875 | 0.6399 | 0.8000 |
| No log | 2.0 | 216 | 0.6977 | 0.6610 | 0.6977 | 0.8353 |
| No log | 2.0185 | 218 | 0.8200 | 0.6341 | 0.8200 | 0.9055 |
| No log | 2.0370 | 220 | 0.9273 | 0.5465 | 0.9273 | 0.9630 |
| No log | 2.0556 | 222 | 0.9850 | 0.5205 | 0.9850 | 0.9925 |
| No log | 2.0741 | 224 | 1.0705 | 0.4912 | 1.0705 | 1.0346 |
| No log | 2.0926 | 226 | 1.0222 | 0.5402 | 1.0222 | 1.0110 |
| No log | 2.1111 | 228 | 0.9135 | 0.5205 | 0.9135 | 0.9558 |
| No log | 2.1296 | 230 | 0.8471 | 0.5991 | 0.8471 | 0.9204 |
| No log | 2.1481 | 232 | 0.7860 | 0.6069 | 0.7860 | 0.8866 |
| No log | 2.1667 | 234 | 0.7967 | 0.6069 | 0.7967 | 0.8926 |
| No log | 2.1852 | 236 | 0.8680 | 0.6341 | 0.8680 | 0.9317 |
| No log | 2.2037 | 238 | 0.9025 | 0.6341 | 0.9025 | 0.9500 |
| No log | 2.2222 | 240 | 0.8540 | 0.6341 | 0.8540 | 0.9241 |
| No log | 2.2407 | 242 | 0.7437 | 0.6341 | 0.7437 | 0.8624 |
| No log | 2.2593 | 244 | 0.6469 | 0.6491 | 0.6469 | 0.8043 |
| No log | 2.2778 | 246 | 0.6494 | 0.5041 | 0.6494 | 0.8058 |
| No log | 2.2963 | 248 | 0.6706 | 0.4809 | 0.6706 | 0.8189 |
| No log | 2.3148 | 250 | 0.6641 | 0.4809 | 0.6641 | 0.8149 |
| No log | 2.3333 | 252 | 0.6428 | 0.4809 | 0.6428 | 0.8018 |
| No log | 2.3519 | 254 | 0.6061 | 0.7267 | 0.6061 | 0.7785 |
| No log | 2.3704 | 256 | 0.6260 | 0.7018 | 0.6260 | 0.7912 |
| No log | 2.3889 | 258 | 0.6140 | 0.7018 | 0.6140 | 0.7836 |
| No log | 2.4074 | 260 | 0.5840 | 0.7267 | 0.5840 | 0.7642 |
| No log | 2.4259 | 262 | 0.5905 | 0.7018 | 0.5905 | 0.7684 |
| No log | 2.4444 | 264 | 0.6593 | 0.7351 | 0.6593 | 0.8120 |
| No log | 2.4630 | 266 | 0.7959 | 0.6171 | 0.7959 | 0.8921 |
| No log | 2.4815 | 268 | 0.8814 | 0.5852 | 0.8814 | 0.9388 |
| No log | 2.5 | 270 | 0.9443 | 0.5852 | 0.9443 | 0.9717 |
| No log | 2.5185 | 272 | 1.0251 | 0.5852 | 1.0251 | 1.0124 |
| No log | 2.5370 | 274 | 1.0016 | 0.6258 | 1.0016 | 1.0008 |
| No log | 2.5556 | 276 | 0.9451 | 0.6258 | 0.9451 | 0.9722 |
| No log | 2.5741 | 278 | 0.8355 | 0.6809 | 0.8355 | 0.9141 |
| No log | 2.5926 | 280 | 0.8232 | 0.6809 | 0.8232 | 0.9073 |
| No log | 2.6111 | 282 | 0.8343 | 0.6488 | 0.8343 | 0.9134 |
| No log | 2.6296 | 284 | 0.7983 | 0.6715 | 0.7983 | 0.8935 |
| No log | 2.6481 | 286 | 0.7124 | 0.7588 | 0.7124 | 0.8441 |
| No log | 2.6667 | 288 | 0.6131 | 0.7588 | 0.6131 | 0.7830 |
| No log | 2.6852 | 290 | 0.5427 | 0.7511 | 0.5427 | 0.7367 |
| No log | 2.7037 | 292 | 0.5408 | 0.7419 | 0.5408 | 0.7354 |
| No log | 2.7222 | 294 | 0.5923 | 0.7419 | 0.5923 | 0.7696 |
| No log | 2.7407 | 296 | 0.6411 | 0.7186 | 0.6411 | 0.8007 |
| No log | 2.7593 | 298 | 0.7416 | 0.7379 | 0.7416 | 0.8612 |
| No log | 2.7778 | 300 | 0.7585 | 0.7379 | 0.7585 | 0.8709 |
| No log | 2.7963 | 302 | 0.6385 | 0.7077 | 0.6385 | 0.7990 |
| No log | 2.8148 | 304 | 0.5313 | 0.7186 | 0.5313 | 0.7289 |
| No log | 2.8333 | 306 | 0.4911 | 0.6854 | 0.4911 | 0.7008 |
| No log | 2.8519 | 308 | 0.5176 | 0.7186 | 0.5176 | 0.7195 |
| No log | 2.8704 | 310 | 0.5922 | 0.7186 | 0.5922 | 0.7696 |
| No log | 2.8889 | 312 | 0.6482 | 0.7186 | 0.6482 | 0.8051 |
| No log | 2.9074 | 314 | 0.7012 | 0.7186 | 0.7012 | 0.8374 |
| No log | 2.9259 | 316 | 0.6843 | 0.7267 | 0.6843 | 0.8272 |
| No log | 2.9444 | 318 | 0.5690 | 0.7267 | 0.5690 | 0.7543 |
| No log | 2.9630 | 320 | 0.4705 | 0.6854 | 0.4705 | 0.6859 |
| No log | 2.9815 | 322 | 0.4456 | 0.6963 | 0.4456 | 0.6675 |
| No log | 3.0 | 324 | 0.4433 | 0.7826 | 0.4433 | 0.6658 |
| No log | 3.0185 | 326 | 0.4563 | 0.8109 | 0.4563 | 0.6755 |
| No log | 3.0370 | 328 | 0.4688 | 0.7465 | 0.4688 | 0.6847 |
| No log | 3.0556 | 330 | 0.5550 | 0.7419 | 0.5550 | 0.7450 |
| No log | 3.0741 | 332 | 0.6241 | 0.7267 | 0.6241 | 0.7900 |
| No log | 3.0926 | 334 | 0.5939 | 0.7419 | 0.5939 | 0.7706 |
| No log | 3.1111 | 336 | 0.5722 | 0.7511 | 0.5722 | 0.7564 |
| No log | 3.1296 | 338 | 0.5861 | 0.7267 | 0.5861 | 0.7656 |
| No log | 3.1481 | 340 | 0.6048 | 0.7267 | 0.6048 | 0.7777 |
| No log | 3.1667 | 342 | 0.6298 | 0.7267 | 0.6298 | 0.7936 |
| No log | 3.1852 | 344 | 0.7273 | 0.7267 | 0.7273 | 0.8528 |
| No log | 3.2037 | 346 | 0.7824 | 0.7271 | 0.7824 | 0.8845 |
| No log | 3.2222 | 348 | 0.7969 | 0.7271 | 0.7969 | 0.8927 |
| No log | 3.2407 | 350 | 0.7635 | 0.7271 | 0.7635 | 0.8738 |
| No log | 3.2593 | 352 | 0.7172 | 0.7269 | 0.7172 | 0.8469 |
| No log | 3.2778 | 354 | 0.7213 | 0.7018 | 0.7213 | 0.8493 |
| No log | 3.2963 | 356 | 0.7053 | 0.7018 | 0.7053 | 0.8398 |
| No log | 3.3148 | 358 | 0.6747 | 0.7018 | 0.6747 | 0.8214 |
| No log | 3.3333 | 360 | 0.6658 | 0.7018 | 0.6658 | 0.8160 |
| No log | 3.3519 | 362 | 0.6681 | 0.7018 | 0.6681 | 0.8174 |
| No log | 3.3704 | 364 | 0.6068 | 0.7018 | 0.6068 | 0.7790 |
| No log | 3.3889 | 366 | 0.5700 | 0.7304 | 0.5700 | 0.7550 |
| No log | 3.4074 | 368 | 0.5779 | 0.7343 | 0.5779 | 0.7602 |
| No log | 3.4259 | 370 | 0.5928 | 0.7080 | 0.5928 | 0.7699 |
| No log | 3.4444 | 372 | 0.5966 | 0.7080 | 0.5966 | 0.7724 |
| No log | 3.4630 | 374 | 0.6141 | 0.7647 | 0.6141 | 0.7836 |
| No log | 3.4815 | 376 | 0.6613 | 0.7018 | 0.6613 | 0.8132 |
| No log | 3.5 | 378 | 0.6767 | 0.7018 | 0.6767 | 0.8226 |
| No log | 3.5185 | 380 | 0.6510 | 0.7018 | 0.6510 | 0.8068 |
| No log | 3.5370 | 382 | 0.5857 | 0.7018 | 0.5857 | 0.7653 |
| No log | 3.5556 | 384 | 0.5180 | 0.7018 | 0.5180 | 0.7197 |
| No log | 3.5741 | 386 | 0.4853 | 0.7531 | 0.4853 | 0.6966 |
| No log | 3.5926 | 388 | 0.4938 | 0.7136 | 0.4938 | 0.7027 |
| No log | 3.6111 | 390 | 0.5799 | 0.7018 | 0.5799 | 0.7615 |
| No log | 3.6296 | 392 | 0.6651 | 0.7018 | 0.6651 | 0.8155 |
| No log | 3.6481 | 394 | 0.7330 | 0.7018 | 0.7330 | 0.8562 |
| No log | 3.6667 | 396 | 0.7058 | 0.7018 | 0.7058 | 0.8401 |
| No log | 3.6852 | 398 | 0.6642 | 0.7018 | 0.6642 | 0.8150 |
| No log | 3.7037 | 400 | 0.6355 | 0.7267 | 0.6355 | 0.7972 |
| No log | 3.7222 | 402 | 0.6365 | 0.7267 | 0.6365 | 0.7978 |
| No log | 3.7407 | 404 | 0.6021 | 0.7267 | 0.6021 | 0.7760 |
| No log | 3.7593 | 406 | 0.5315 | 0.7419 | 0.5315 | 0.7291 |
| No log | 3.7778 | 408 | 0.4960 | 0.7216 | 0.4960 | 0.7043 |
| No log | 3.7963 | 410 | 0.4933 | 0.7465 | 0.4933 | 0.7024 |
| No log | 3.8148 | 412 | 0.5080 | 0.7545 | 0.5080 | 0.7127 |
| No log | 3.8333 | 414 | 0.5403 | 0.7419 | 0.5403 | 0.7351 |
| No log | 3.8519 | 416 | 0.6044 | 0.7267 | 0.6044 | 0.7774 |
| No log | 3.8704 | 418 | 0.6456 | 0.7267 | 0.6456 | 0.8035 |
| No log | 3.8889 | 420 | 0.6241 | 0.7267 | 0.6241 | 0.7900 |
| No log | 3.9074 | 422 | 0.5728 | 0.7267 | 0.5728 | 0.7568 |
| No log | 3.9259 | 424 | 0.5290 | 0.7267 | 0.5290 | 0.7273 |
| No log | 3.9444 | 426 | 0.5204 | 0.7267 | 0.5204 | 0.7214 |
| No log | 3.9630 | 428 | 0.5263 | 0.7018 | 0.5263 | 0.7254 |
| No log | 3.9815 | 430 | 0.5676 | 0.7018 | 0.5676 | 0.7534 |
| No log | 4.0 | 432 | 0.6368 | 0.6909 | 0.6368 | 0.7980 |
| No log | 4.0185 | 434 | 0.6328 | 0.6909 | 0.6328 | 0.7955 |
| No log | 4.0370 | 436 | 0.5862 | 0.6909 | 0.5862 | 0.7656 |
| No log | 4.0556 | 438 | 0.5039 | 0.7018 | 0.5039 | 0.7098 |
| No log | 4.0741 | 440 | 0.4337 | 0.7647 | 0.4337 | 0.6586 |
| No log | 4.0926 | 442 | 0.4224 | 0.7602 | 0.4224 | 0.6499 |
| No log | 4.1111 | 444 | 0.4559 | 0.7636 | 0.4559 | 0.6752 |
| No log | 4.1296 | 446 | 0.4593 | 0.7636 | 0.4593 | 0.6777 |
| No log | 4.1481 | 448 | 0.4501 | 0.75 | 0.4501 | 0.6709 |
| No log | 4.1667 | 450 | 0.4624 | 0.7897 | 0.4624 | 0.6800 |
| No log | 4.1852 | 452 | 0.5136 | 0.7511 | 0.5136 | 0.7166 |
| No log | 4.2037 | 454 | 0.5615 | 0.7267 | 0.5615 | 0.7493 |
| No log | 4.2222 | 456 | 0.5883 | 0.7267 | 0.5883 | 0.7670 |
| No log | 4.2407 | 458 | 0.5664 | 0.7511 | 0.5664 | 0.7526 |
| No log | 4.2593 | 460 | 0.5935 | 0.7511 | 0.5935 | 0.7704 |
| No log | 4.2778 | 462 | 0.6155 | 0.7267 | 0.6155 | 0.7845 |
| No log | 4.2963 | 464 | 0.6231 | 0.7267 | 0.6231 | 0.7894 |
| No log | 4.3148 | 466 | 0.6484 | 0.6909 | 0.6484 | 0.8052 |
| No log | 4.3333 | 468 | 0.6807 | 0.6909 | 0.6807 | 0.8250 |
| No log | 4.3519 | 470 | 0.6994 | 0.6909 | 0.6994 | 0.8363 |
| No log | 4.3704 | 472 | 0.6629 | 0.6909 | 0.6629 | 0.8142 |
| No log | 4.3889 | 474 | 0.6277 | 0.6909 | 0.6277 | 0.7923 |
| No log | 4.4074 | 476 | 0.5914 | 0.7267 | 0.5914 | 0.7690 |
| No log | 4.4259 | 478 | 0.5947 | 0.7149 | 0.5947 | 0.7711 |
| No log | 4.4444 | 480 | 0.5757 | 0.7267 | 0.5757 | 0.7588 |
| No log | 4.4630 | 482 | 0.5471 | 0.7267 | 0.5471 | 0.7396 |
| No log | 4.4815 | 484 | 0.5387 | 0.7267 | 0.5387 | 0.7340 |
| No log | 4.5 | 486 | 0.5287 | 0.7267 | 0.5287 | 0.7271 |
| No log | 4.5185 | 488 | 0.4734 | 0.7037 | 0.4734 | 0.6881 |
| No log | 4.5370 | 490 | 0.4334 | 0.7442 | 0.4334 | 0.6584 |
| No log | 4.5556 | 492 | 0.4309 | 0.7442 | 0.4309 | 0.6564 |
| No log | 4.5741 | 494 | 0.4650 | 0.7304 | 0.4650 | 0.6819 |
| No log | 4.5926 | 496 | 0.5100 | 0.7267 | 0.5100 | 0.7141 |
| No log | 4.6111 | 498 | 0.5774 | 0.7267 | 0.5774 | 0.7598 |
| 0.3612 | 4.6296 | 500 | 0.6143 | 0.7267 | 0.6143 | 0.7838 |
| 0.3612 | 4.6481 | 502 | 0.5963 | 0.7267 | 0.5963 | 0.7722 |
| 0.3612 | 4.6667 | 504 | 0.5837 | 0.7267 | 0.5837 | 0.7640 |
| 0.3612 | 4.6852 | 506 | 0.5419 | 0.7267 | 0.5419 | 0.7362 |
| 0.3612 | 4.7037 | 508 | 0.5203 | 0.7511 | 0.5203 | 0.7213 |
| 0.3612 | 4.7222 | 510 | 0.5376 | 0.7267 | 0.5376 | 0.7332 |
| 0.3612 | 4.7407 | 512 | 0.5589 | 0.7267 | 0.5589 | 0.7476 |
| 0.3612 | 4.7593 | 514 | 0.5857 | 0.7267 | 0.5857 | 0.7653 |
| 0.3612 | 4.7778 | 516 | 0.5742 | 0.7267 | 0.5742 | 0.7578 |
| 0.3612 | 4.7963 | 518 | 0.5565 | 0.7267 | 0.5565 | 0.7460 |
| 0.3612 | 4.8148 | 520 | 0.5191 | 0.7176 | 0.5191 | 0.7205 |
| 0.3612 | 4.8333 | 522 | 0.4910 | 0.7465 | 0.4910 | 0.7007 |
| 0.3612 | 4.8519 | 524 | 0.4922 | 0.7465 | 0.4922 | 0.7016 |
| 0.3612 | 4.8704 | 526 | 0.4951 | 0.7465 | 0.4951 | 0.7036 |
| 0.3612 | 4.8889 | 528 | 0.5057 | 0.7176 | 0.5057 | 0.7111 |
| 0.3612 | 4.9074 | 530 | 0.5410 | 0.7267 | 0.5410 | 0.7355 |
| 0.3612 | 4.9259 | 532 | 0.5603 | 0.7267 | 0.5603 | 0.7485 |
| 0.3612 | 4.9444 | 534 | 0.5625 | 0.7267 | 0.5625 | 0.7500 |
| 0.3612 | 4.9630 | 536 | 0.6082 | 0.7267 | 0.6082 | 0.7799 |
| 0.3612 | 4.9815 | 538 | 0.6694 | 0.6909 | 0.6694 | 0.8181 |
| 0.3612 | 5.0 | 540 | 0.7058 | 0.6909 | 0.7058 | 0.8401 |
| 0.3612 | 5.0185 | 542 | 0.7442 | 0.6909 | 0.7442 | 0.8627 |
| 0.3612 | 5.0370 | 544 | 0.7898 | 0.6488 | 0.7898 | 0.8887 |
| 0.3612 | 5.0556 | 546 | 0.8132 | 0.6488 | 0.8132 | 0.9018 |
| 0.3612 | 5.0741 | 548 | 0.7959 | 0.6488 | 0.7959 | 0.8921 |
| 0.3612 | 5.0926 | 550 | 0.7613 | 0.6488 | 0.7613 | 0.8725 |
| 0.3612 | 5.1111 | 552 | 0.6824 | 0.6909 | 0.6824 | 0.8261 |
| 0.3612 | 5.1296 | 554 | 0.6230 | 0.6909 | 0.6230 | 0.7893 |
| 0.3612 | 5.1481 | 556 | 0.5759 | 0.7267 | 0.5759 | 0.7589 |
| 0.3612 | 5.1667 | 558 | 0.5419 | 0.7267 | 0.5419 | 0.7361 |
| 0.3612 | 5.1852 | 560 | 0.5131 | 0.7267 | 0.5131 | 0.7163 |
| 0.3612 | 5.2037 | 562 | 0.5183 | 0.7267 | 0.5183 | 0.7199 |
| 0.3612 | 5.2222 | 564 | 0.5069 | 0.7304 | 0.5069 | 0.7120 |
| 0.3612 | 5.2407 | 566 | 0.4994 | 0.7712 | 0.4994 | 0.7067 |
| 0.3612 | 5.2593 | 568 | 0.5000 | 0.7712 | 0.5000 | 0.7071 |
| 0.3612 | 5.2778 | 570 | 0.5011 | 0.7712 | 0.5011 | 0.7079 |
| 0.3612 | 5.2963 | 572 | 0.5060 | 0.7712 | 0.5060 | 0.7113 |
| 0.3612 | 5.3148 | 574 | 0.5106 | 0.7712 | 0.5106 | 0.7146 |
| 0.3612 | 5.3333 | 576 | 0.5132 | 0.7712 | 0.5132 | 0.7164 |
| 0.3612 | 5.3519 | 578 | 0.5152 | 0.7712 | 0.5152 | 0.7178 |
| 0.3612 | 5.3704 | 580 | 0.5158 | 0.7712 | 0.5158 | 0.7182 |
| 0.3612 | 5.3889 | 582 | 0.5228 | 0.7712 | 0.5228 | 0.7230 |
| 0.3612 | 5.4074 | 584 | 0.5297 | 0.7216 | 0.5297 | 0.7278 |
| 0.3612 | 5.4259 | 586 | 0.5368 | 0.6963 | 0.5368 | 0.7327 |
| 0.3612 | 5.4444 | 588 | 0.5283 | 0.6963 | 0.5283 | 0.7268 |
| 0.3612 | 5.4630 | 590 | 0.5090 | 0.7712 | 0.5090 | 0.7135 |
| 0.3612 | 5.4815 | 592 | 0.4998 | 0.7163 | 0.4998 | 0.7070 |
| 0.3612 | 5.5 | 594 | 0.4902 | 0.7163 | 0.4902 | 0.7001 |
| 0.3612 | 5.5185 | 596 | 0.4758 | 0.7857 | 0.4758 | 0.6898 |
| 0.3612 | 5.5370 | 598 | 0.4672 | 0.7857 | 0.4672 | 0.6835 |
| 0.3612 | 5.5556 | 600 | 0.4522 | 0.7985 | 0.4522 | 0.6725 |
| 0.3612 | 5.5741 | 602 | 0.4414 | 0.7985 | 0.4414 | 0.6644 |
| 0.3612 | 5.5926 | 604 | 0.4773 | 0.7393 | 0.4773 | 0.6909 |
| 0.3612 | 5.6111 | 606 | 0.5057 | 0.7267 | 0.5057 | 0.7111 |
| 0.3612 | 5.6296 | 608 | 0.5414 | 0.7267 | 0.5414 | 0.7358 |
| 0.3612 | 5.6481 | 610 | 0.6106 | 0.7267 | 0.6106 | 0.7814 |
| 0.3612 | 5.6667 | 612 | 0.6572 | 0.6667 | 0.6572 | 0.8107 |
| 0.3612 | 5.6852 | 614 | 0.6516 | 0.6667 | 0.6516 | 0.8072 |
| 0.3612 | 5.7037 | 616 | 0.6110 | 0.6667 | 0.6110 | 0.7816 |
| 0.3612 | 5.7222 | 618 | 0.5559 | 0.6767 | 0.5559 | 0.7456 |
| 0.3612 | 5.7407 | 620 | 0.5238 | 0.6491 | 0.5238 | 0.7238 |
| 0.3612 | 5.7593 | 622 | 0.5219 | 0.7037 | 0.5219 | 0.7224 |
| 0.3612 | 5.7778 | 624 | 0.5290 | 0.7037 | 0.5290 | 0.7273 |
| 0.3612 | 5.7963 | 626 | 0.5263 | 0.7037 | 0.5263 | 0.7254 |
| 0.3612 | 5.8148 | 628 | 0.5101 | 0.7037 | 0.5101 | 0.7142 |
| 0.3612 | 5.8333 | 630 | 0.5139 | 0.7393 | 0.5139 | 0.7169 |
| 0.3612 | 5.8519 | 632 | 0.5491 | 0.7267 | 0.5491 | 0.7410 |
| 0.3612 | 5.8704 | 634 | 0.5871 | 0.7267 | 0.5871 | 0.7662 |
| 0.3612 | 5.8889 | 636 | 0.5843 | 0.7267 | 0.5843 | 0.7644 |
| 0.3612 | 5.9074 | 638 | 0.5706 | 0.7267 | 0.5706 | 0.7554 |
| 0.3612 | 5.9259 | 640 | 0.5543 | 0.7267 | 0.5543 | 0.7445 |
| 0.3612 | 5.9444 | 642 | 0.5103 | 0.7393 | 0.5103 | 0.7143 |
| 0.3612 | 5.9630 | 644 | 0.4849 | 0.7393 | 0.4849 | 0.6964 |
| 0.3612 | 5.9815 | 646 | 0.4947 | 0.7393 | 0.4947 | 0.7034 |
| 0.3612 | 6.0 | 648 | 0.4964 | 0.7647 | 0.4964 | 0.7046 |
| 0.3612 | 6.0185 | 650 | 0.4983 | 0.7647 | 0.4983 | 0.7059 |
| 0.3612 | 6.0370 | 652 | 0.5081 | 0.7647 | 0.5081 | 0.7128 |
| 0.3612 | 6.0556 | 654 | 0.5159 | 0.7647 | 0.5159 | 0.7183 |
| 0.3612 | 6.0741 | 656 | 0.5225 | 0.7647 | 0.5225 | 0.7228 |
| 0.3612 | 6.0926 | 658 | 0.5731 | 0.7267 | 0.5731 | 0.7570 |
| 0.3612 | 6.1111 | 660 | 0.6058 | 0.7267 | 0.6058 | 0.7783 |
| 0.3612 | 6.1296 | 662 | 0.6256 | 0.7267 | 0.6256 | 0.7910 |
| 0.3612 | 6.1481 | 664 | 0.6473 | 0.7267 | 0.6473 | 0.8046 |
| 0.3612 | 6.1667 | 666 | 0.6722 | 0.7267 | 0.6722 | 0.8199 |
| 0.3612 | 6.1852 | 668 | 0.7150 | 0.6909 | 0.7150 | 0.8456 |
| 0.3612 | 6.2037 | 670 | 0.7604 | 0.6909 | 0.7604 | 0.8720 |
| 0.3612 | 6.2222 | 672 | 0.7850 | 0.6715 | 0.7850 | 0.8860 |
| 0.3612 | 6.2407 | 674 | 0.7855 | 0.6939 | 0.7855 | 0.8863 |
| 0.3612 | 6.2593 | 676 | 0.7527 | 0.7149 | 0.7527 | 0.8676 |
| 0.3612 | 6.2778 | 678 | 0.6970 | 0.7149 | 0.6970 | 0.8349 |
| 0.3612 | 6.2963 | 680 | 0.6623 | 0.7267 | 0.6623 | 0.8138 |
| 0.3612 | 6.3148 | 682 | 0.6268 | 0.7267 | 0.6268 | 0.7917 |
| 0.3612 | 6.3333 | 684 | 0.5982 | 0.7267 | 0.5982 | 0.7734 |
| 0.3612 | 6.3519 | 686 | 0.5846 | 0.7267 | 0.5846 | 0.7646 |
| 0.3612 | 6.3704 | 688 | 0.5748 | 0.7267 | 0.5748 | 0.7582 |
| 0.3612 | 6.3889 | 690 | 0.5801 | 0.7267 | 0.5801 | 0.7616 |
| 0.3612 | 6.4074 | 692 | 0.5871 | 0.7267 | 0.5871 | 0.7663 |
| 0.3612 | 6.4259 | 694 | 0.5967 | 0.7267 | 0.5967 | 0.7725 |
| 0.3612 | 6.4444 | 696 | 0.6016 | 0.7267 | 0.6016 | 0.7756 |
| 0.3612 | 6.4630 | 698 | 0.6127 | 0.7267 | 0.6127 | 0.7828 |
| 0.3612 | 6.4815 | 700 | 0.6228 | 0.7267 | 0.6228 | 0.7892 |
| 0.3612 | 6.5 | 702 | 0.6258 | 0.7267 | 0.6258 | 0.7911 |
| 0.3612 | 6.5185 | 704 | 0.6340 | 0.7267 | 0.6340 | 0.7963 |
| 0.3612 | 6.5370 | 706 | 0.6607 | 0.7267 | 0.6607 | 0.8128 |
| 0.3612 | 6.5556 | 708 | 0.6839 | 0.7149 | 0.6839 | 0.8270 |
| 0.3612 | 6.5741 | 710 | 0.7257 | 0.6909 | 0.7257 | 0.8519 |
| 0.3612 | 6.5926 | 712 | 0.7846 | 0.6939 | 0.7846 | 0.8858 |
| 0.3612 | 6.6111 | 714 | 0.8158 | 0.6964 | 0.8158 | 0.9032 |
| 0.3612 | 6.6296 | 716 | 0.8150 | 0.6964 | 0.8150 | 0.9028 |
| 0.3612 | 6.6481 | 718 | 0.7901 | 0.6964 | 0.7901 | 0.8889 |
| 0.3612 | 6.6667 | 720 | 0.7571 | 0.6939 | 0.7571 | 0.8701 |
| 0.3612 | 6.6852 | 722 | 0.7032 | 0.7267 | 0.7032 | 0.8386 |
| 0.3612 | 6.7037 | 724 | 0.6414 | 0.6606 | 0.6414 | 0.8009 |
| 0.3612 | 6.7222 | 726 | 0.6026 | 0.6606 | 0.6026 | 0.7762 |
| 0.3612 | 6.7407 | 728 | 0.6019 | 0.6606 | 0.6019 | 0.7758 |
| 0.3612 | 6.7593 | 730 | 0.6243 | 0.7267 | 0.6243 | 0.7901 |
| 0.3612 | 6.7778 | 732 | 0.6687 | 0.7267 | 0.6687 | 0.8178 |
| 0.3612 | 6.7963 | 734 | 0.7230 | 0.7160 | 0.7230 | 0.8503 |
| 0.3612 | 6.8148 | 736 | 0.7518 | 0.7160 | 0.7518 | 0.8671 |
| 0.3612 | 6.8333 | 738 | 0.7497 | 0.7160 | 0.7497 | 0.8659 |
| 0.3612 | 6.8519 | 740 | 0.7193 | 0.7149 | 0.7193 | 0.8481 |
| 0.3612 | 6.8704 | 742 | 0.6748 | 0.7149 | 0.6748 | 0.8215 |
| 0.3612 | 6.8889 | 744 | 0.6270 | 0.7149 | 0.6270 | 0.7918 |
| 0.3612 | 6.9074 | 746 | 0.6004 | 0.7267 | 0.6004 | 0.7749 |
| 0.3612 | 6.9259 | 748 | 0.5866 | 0.7267 | 0.5866 | 0.7659 |
| 0.3612 | 6.9444 | 750 | 0.5831 | 0.7267 | 0.5831 | 0.7636 |
| 0.3612 | 6.9630 | 752 | 0.5796 | 0.7267 | 0.5796 | 0.7613 |
| 0.3612 | 6.9815 | 754 | 0.5875 | 0.7267 | 0.5875 | 0.7665 |
| 0.3612 | 7.0 | 756 | 0.5976 | 0.7267 | 0.5976 | 0.7731 |
| 0.3612 | 7.0185 | 758 | 0.6083 | 0.7267 | 0.6083 | 0.7800 |
| 0.3612 | 7.0370 | 760 | 0.6040 | 0.7267 | 0.6040 | 0.7772 |
| 0.3612 | 7.0556 | 762 | 0.5885 | 0.6854 | 0.5885 | 0.7672 |
| 0.3612 | 7.0741 | 764 | 0.5838 | 0.6495 | 0.5838 | 0.7641 |
| 0.3612 | 7.0926 | 766 | 0.5761 | 0.6594 | 0.5761 | 0.7590 |
| 0.3612 | 7.1111 | 768 | 0.5771 | 0.6594 | 0.5771 | 0.7597 |
| 0.3612 | 7.1296 | 770 | 0.5801 | 0.6495 | 0.5801 | 0.7616 |
| 0.3612 | 7.1481 | 772 | 0.5879 | 0.6495 | 0.5879 | 0.7667 |
| 0.3612 | 7.1667 | 774 | 0.5891 | 0.6495 | 0.5891 | 0.7676 |
| 0.3612 | 7.1852 | 776 | 0.5821 | 0.6495 | 0.5821 | 0.7630 |
| 0.3612 | 7.2037 | 778 | 0.5738 | 0.6860 | 0.5738 | 0.7575 |
| 0.3612 | 7.2222 | 780 | 0.5671 | 0.6860 | 0.5671 | 0.7530 |
| 0.3612 | 7.2407 | 782 | 0.5640 | 0.6860 | 0.5640 | 0.7510 |
| 0.3612 | 7.2593 | 784 | 0.5565 | 0.6860 | 0.5565 | 0.7460 |
| 0.3612 | 7.2778 | 786 | 0.5615 | 0.6860 | 0.5615 | 0.7493 |
| 0.3612 | 7.2963 | 788 | 0.5829 | 0.6235 | 0.5829 | 0.7635 |
| 0.3612 | 7.3148 | 790 | 0.6189 | 0.6606 | 0.6189 | 0.7867 |
| 0.3612 | 7.3333 | 792 | 0.6579 | 0.7267 | 0.6579 | 0.8111 |
| 0.3612 | 7.3519 | 794 | 0.6875 | 0.7149 | 0.6875 | 0.8291 |
| 0.3612 | 7.3704 | 796 | 0.7140 | 0.7149 | 0.7140 | 0.8450 |
| 0.3612 | 7.3889 | 798 | 0.7215 | 0.7149 | 0.7215 | 0.8494 |
| 0.3612 | 7.4074 | 800 | 0.7079 | 0.7149 | 0.7079 | 0.8414 |
| 0.3612 | 7.4259 | 802 | 0.6889 | 0.7149 | 0.6889 | 0.8300 |
| 0.3612 | 7.4444 | 804 | 0.6644 | 0.7267 | 0.6644 | 0.8151 |
| 0.3612 | 7.4630 | 806 | 0.6418 | 0.7267 | 0.6418 | 0.8011 |
| 0.3612 | 7.4815 | 808 | 0.6421 | 0.7267 | 0.6421 | 0.8013 |
| 0.3612 | 7.5 | 810 | 0.6396 | 0.7267 | 0.6396 | 0.7997 |
| 0.3612 | 7.5185 | 812 | 0.6456 | 0.7267 | 0.6456 | 0.8035 |
| 0.3612 | 7.5370 | 814 | 0.6588 | 0.7267 | 0.6588 | 0.8117 |
| 0.3612 | 7.5556 | 816 | 0.6756 | 0.7267 | 0.6756 | 0.8220 |
| 0.3612 | 7.5741 | 818 | 0.6997 | 0.7267 | 0.6997 | 0.8365 |
| 0.3612 | 7.5926 | 820 | 0.7158 | 0.7160 | 0.7158 | 0.8461 |
| 0.3612 | 7.6111 | 822 | 0.7264 | 0.7160 | 0.7264 | 0.8523 |
| 0.3612 | 7.6296 | 824 | 0.7158 | 0.7149 | 0.7158 | 0.8460 |
| 0.3612 | 7.6481 | 826 | 0.6939 | 0.7267 | 0.6939 | 0.8330 |
| 0.3612 | 7.6667 | 828 | 0.6555 | 0.7267 | 0.6555 | 0.8096 |
| 0.3612 | 7.6852 | 830 | 0.6405 | 0.7267 | 0.6405 | 0.8003 |
| 0.3612 | 7.7037 | 832 | 0.6288 | 0.7267 | 0.6288 | 0.7930 |
| 0.3612 | 7.7222 | 834 | 0.6225 | 0.7267 | 0.6225 | 0.7890 |
| 0.3612 | 7.7407 | 836 | 0.6230 | 0.7267 | 0.6230 | 0.7893 |
| 0.3612 | 7.7593 | 838 | 0.6183 | 0.7267 | 0.6183 | 0.7863 |
| 0.3612 | 7.7778 | 840 | 0.6096 | 0.7267 | 0.6096 | 0.7807 |
| 0.3612 | 7.7963 | 842 | 0.5950 | 0.7267 | 0.5950 | 0.7714 |
| 0.3612 | 7.8148 | 844 | 0.5767 | 0.7267 | 0.5767 | 0.7594 |
| 0.3612 | 7.8333 | 846 | 0.5588 | 0.7267 | 0.5588 | 0.7475 |
| 0.3612 | 7.8519 | 848 | 0.5255 | 0.7267 | 0.5255 | 0.7249 |
| 0.3612 | 7.8704 | 850 | 0.5075 | 0.7267 | 0.5075 | 0.7124 |
| 0.3612 | 7.8889 | 852 | 0.5028 | 0.7267 | 0.5028 | 0.7091 |
| 0.3612 | 7.9074 | 854 | 0.4876 | 0.7267 | 0.4876 | 0.6983 |
| 0.3612 | 7.9259 | 856 | 0.4835 | 0.7267 | 0.4835 | 0.6954 |
| 0.3612 | 7.9444 | 858 | 0.4882 | 0.7267 | 0.4882 | 0.6987 |
| 0.3612 | 7.9630 | 860 | 0.4989 | 0.7267 | 0.4989 | 0.7063 |
| 0.3612 | 7.9815 | 862 | 0.5147 | 0.7267 | 0.5147 | 0.7174 |
| 0.3612 | 8.0 | 864 | 0.5260 | 0.7267 | 0.5260 | 0.7253 |
| 0.3612 | 8.0185 | 866 | 0.5380 | 0.7267 | 0.5380 | 0.7335 |
| 0.3612 | 8.0370 | 868 | 0.5316 | 0.7267 | 0.5316 | 0.7291 |
| 0.3612 | 8.0556 | 870 | 0.5101 | 0.7267 | 0.5101 | 0.7142 |
| 0.3612 | 8.0741 | 872 | 0.5039 | 0.7267 | 0.5039 | 0.7099 |
| 0.3612 | 8.0926 | 874 | 0.5033 | 0.7267 | 0.5033 | 0.7095 |
| 0.3612 | 8.1111 | 876 | 0.4982 | 0.7794 | 0.4982 | 0.7058 |
| 0.3612 | 8.1296 | 878 | 0.5067 | 0.7794 | 0.5067 | 0.7118 |
| 0.3612 | 8.1481 | 880 | 0.5242 | 0.7267 | 0.5242 | 0.7240 |
| 0.3612 | 8.1667 | 882 | 0.5403 | 0.7267 | 0.5403 | 0.7350 |
| 0.3612 | 8.1852 | 884 | 0.5511 | 0.7267 | 0.5511 | 0.7424 |
| 0.3612 | 8.2037 | 886 | 0.5481 | 0.7267 | 0.5481 | 0.7404 |
| 0.3612 | 8.2222 | 888 | 0.5493 | 0.7267 | 0.5493 | 0.7411 |
| 0.3612 | 8.2407 | 890 | 0.5524 | 0.7267 | 0.5524 | 0.7432 |
| 0.3612 | 8.2593 | 892 | 0.5499 | 0.7267 | 0.5499 | 0.7416 |
| 0.3612 | 8.2778 | 894 | 0.5601 | 0.7267 | 0.5601 | 0.7484 |
| 0.3612 | 8.2963 | 896 | 0.5766 | 0.7267 | 0.5766 | 0.7593 |
| 0.3612 | 8.3148 | 898 | 0.5784 | 0.7267 | 0.5784 | 0.7605 |
| 0.3612 | 8.3333 | 900 | 0.5671 | 0.7267 | 0.5671 | 0.7531 |
| 0.3612 | 8.3519 | 902 | 0.5491 | 0.7267 | 0.5491 | 0.7410 |
| 0.3612 | 8.3704 | 904 | 0.5475 | 0.7267 | 0.5475 | 0.7399 |
| 0.3612 | 8.3889 | 906 | 0.5429 | 0.7267 | 0.5429 | 0.7368 |
| 0.3612 | 8.4074 | 908 | 0.5422 | 0.7267 | 0.5422 | 0.7364 |
| 0.3612 | 8.4259 | 910 | 0.5461 | 0.7267 | 0.5461 | 0.7390 |
| 0.3612 | 8.4444 | 912 | 0.5593 | 0.7267 | 0.5593 | 0.7479 |
| 0.3612 | 8.4630 | 914 | 0.5732 | 0.7267 | 0.5732 | 0.7571 |
| 0.3612 | 8.4815 | 916 | 0.5812 | 0.7267 | 0.5812 | 0.7624 |
| 0.3612 | 8.5 | 918 | 0.5892 | 0.7267 | 0.5892 | 0.7676 |
| 0.3612 | 8.5185 | 920 | 0.5941 | 0.7267 | 0.5941 | 0.7708 |
| 0.3612 | 8.5370 | 922 | 0.6049 | 0.7267 | 0.6049 | 0.7778 |
| 0.3612 | 8.5556 | 924 | 0.6141 | 0.7267 | 0.6141 | 0.7836 |
| 0.3612 | 8.5741 | 926 | 0.6163 | 0.7267 | 0.6163 | 0.7850 |
| 0.3612 | 8.5926 | 928 | 0.6224 | 0.7267 | 0.6224 | 0.7889 |
| 0.3612 | 8.6111 | 930 | 0.6298 | 0.7018 | 0.6298 | 0.7936 |
| 0.3612 | 8.6296 | 932 | 0.6295 | 0.7018 | 0.6295 | 0.7934 |
| 0.3612 | 8.6481 | 934 | 0.6365 | 0.7018 | 0.6365 | 0.7978 |
| 0.3612 | 8.6667 | 936 | 0.6409 | 0.7018 | 0.6409 | 0.8005 |
| 0.3612 | 8.6852 | 938 | 0.6327 | 0.7018 | 0.6327 | 0.7955 |
| 0.3612 | 8.7037 | 940 | 0.6179 | 0.7018 | 0.6179 | 0.7861 |
| 0.3612 | 8.7222 | 942 | 0.5986 | 0.7267 | 0.5986 | 0.7737 |
| 0.3612 | 8.7407 | 944 | 0.5739 | 0.7267 | 0.5739 | 0.7575 |
| 0.3612 | 8.7593 | 946 | 0.5460 | 0.7267 | 0.5460 | 0.7389 |
| 0.3612 | 8.7778 | 948 | 0.5123 | 0.7267 | 0.5123 | 0.7158 |
| 0.3612 | 8.7963 | 950 | 0.4858 | 0.7393 | 0.4858 | 0.6970 |
| 0.3612 | 8.8148 | 952 | 0.4779 | 0.7267 | 0.4779 | 0.6913 |
| 0.3612 | 8.8333 | 954 | 0.4824 | 0.7267 | 0.4824 | 0.6945 |
| 0.3612 | 8.8519 | 956 | 0.4980 | 0.7267 | 0.4980 | 0.7057 |
| 0.3612 | 8.8704 | 958 | 0.5258 | 0.7267 | 0.5258 | 0.7251 |
| 0.3612 | 8.8889 | 960 | 0.5625 | 0.7267 | 0.5625 | 0.7500 |
| 0.3612 | 8.9074 | 962 | 0.5913 | 0.7018 | 0.5913 | 0.7690 |
| 0.3612 | 8.9259 | 964 | 0.6153 | 0.7018 | 0.6153 | 0.7844 |
| 0.3612 | 8.9444 | 966 | 0.6281 | 0.7040 | 0.6281 | 0.7926 |
| 0.3612 | 8.9630 | 968 | 0.6304 | 0.7040 | 0.6304 | 0.7940 |
| 0.3612 | 8.9815 | 970 | 0.6295 | 0.7040 | 0.6295 | 0.7934 |
| 0.3612 | 9.0 | 972 | 0.6284 | 0.7040 | 0.6284 | 0.7927 |
| 0.3612 | 9.0185 | 974 | 0.6157 | 0.7018 | 0.6157 | 0.7846 |
| 0.3612 | 9.0370 | 976 | 0.5976 | 0.7018 | 0.5976 | 0.7731 |
| 0.3612 | 9.0556 | 978 | 0.5780 | 0.7018 | 0.5780 | 0.7603 |
| 0.3612 | 9.0741 | 980 | 0.5660 | 0.7267 | 0.5660 | 0.7524 |
| 0.3612 | 9.0926 | 982 | 0.5576 | 0.7267 | 0.5576 | 0.7467 |
| 0.3612 | 9.1111 | 984 | 0.5509 | 0.7267 | 0.5509 | 0.7422 |
| 0.3612 | 9.1296 | 986 | 0.5470 | 0.7267 | 0.5470 | 0.7396 |
| 0.3612 | 9.1481 | 988 | 0.5443 | 0.7267 | 0.5443 | 0.7377 |
| 0.3612 | 9.1667 | 990 | 0.5455 | 0.7267 | 0.5455 | 0.7386 |
| 0.3612 | 9.1852 | 992 | 0.5498 | 0.7267 | 0.5498 | 0.7415 |
| 0.3612 | 9.2037 | 994 | 0.5622 | 0.7018 | 0.5622 | 0.7498 |
| 0.3612 | 9.2222 | 996 | 0.5778 | 0.7018 | 0.5778 | 0.7601 |
| 0.3612 | 9.2407 | 998 | 0.5882 | 0.7018 | 0.5882 | 0.7669 |
| 0.064 | 9.2593 | 1000 | 0.6001 | 0.7018 | 0.6001 | 0.7746 |
| 0.064 | 9.2778 | 1002 | 0.6051 | 0.7018 | 0.6051 | 0.7779 |
| 0.064 | 9.2963 | 1004 | 0.6147 | 0.7040 | 0.6147 | 0.7840 |
| 0.064 | 9.3148 | 1006 | 0.6215 | 0.7040 | 0.6215 | 0.7883 |
| 0.064 | 9.3333 | 1008 | 0.6282 | 0.7040 | 0.6282 | 0.7926 |
| 0.064 | 9.3519 | 1010 | 0.6388 | 0.7451 | 0.6388 | 0.7992 |
| 0.064 | 9.3704 | 1012 | 0.6445 | 0.7451 | 0.6445 | 0.8028 |
| 0.064 | 9.3889 | 1014 | 0.6441 | 0.7451 | 0.6441 | 0.8025 |
| 0.064 | 9.4074 | 1016 | 0.6384 | 0.7451 | 0.6384 | 0.7990 |
| 0.064 | 9.4259 | 1018 | 0.6289 | 0.7451 | 0.6289 | 0.7930 |
| 0.064 | 9.4444 | 1020 | 0.6178 | 0.7040 | 0.6178 | 0.7860 |
| 0.064 | 9.4630 | 1022 | 0.6028 | 0.7040 | 0.6028 | 0.7764 |
| 0.064 | 9.4815 | 1024 | 0.5922 | 0.7018 | 0.5922 | 0.7695 |
| 0.064 | 9.5 | 1026 | 0.5824 | 0.7018 | 0.5824 | 0.7632 |
| 0.064 | 9.5185 | 1028 | 0.5738 | 0.7267 | 0.5738 | 0.7575 |
| 0.064 | 9.5370 | 1030 | 0.5689 | 0.7267 | 0.5689 | 0.7542 |
| 0.064 | 9.5556 | 1032 | 0.5624 | 0.7267 | 0.5624 | 0.7500 |
| 0.064 | 9.5741 | 1034 | 0.5573 | 0.7267 | 0.5573 | 0.7465 |
| 0.064 | 9.5926 | 1036 | 0.5527 | 0.7267 | 0.5527 | 0.7435 |
| 0.064 | 9.6111 | 1038 | 0.5482 | 0.7267 | 0.5482 | 0.7404 |
| 0.064 | 9.6296 | 1040 | 0.5450 | 0.7267 | 0.5450 | 0.7382 |
| 0.064 | 9.6481 | 1042 | 0.5425 | 0.7267 | 0.5425 | 0.7366 |
| 0.064 | 9.6667 | 1044 | 0.5402 | 0.7267 | 0.5402 | 0.7350 |
| 0.064 | 9.6852 | 1046 | 0.5383 | 0.7267 | 0.5383 | 0.7337 |
| 0.064 | 9.7037 | 1048 | 0.5391 | 0.7267 | 0.5391 | 0.7342 |
| 0.064 | 9.7222 | 1050 | 0.5401 | 0.7267 | 0.5401 | 0.7349 |
| 0.064 | 9.7407 | 1052 | 0.5410 | 0.7267 | 0.5410 | 0.7355 |
| 0.064 | 9.7593 | 1054 | 0.5421 | 0.7267 | 0.5421 | 0.7363 |
| 0.064 | 9.7778 | 1056 | 0.5421 | 0.7267 | 0.5421 | 0.7363 |
| 0.064 | 9.7963 | 1058 | 0.5421 | 0.7267 | 0.5421 | 0.7362 |
| 0.064 | 9.8148 | 1060 | 0.5418 | 0.7267 | 0.5418 | 0.7361 |
| 0.064 | 9.8333 | 1062 | 0.5416 | 0.7267 | 0.5416 | 0.7359 |
| 0.064 | 9.8519 | 1064 | 0.5421 | 0.7267 | 0.5421 | 0.7363 |
| 0.064 | 9.8704 | 1066 | 0.5425 | 0.7267 | 0.5425 | 0.7365 |
| 0.064 | 9.8889 | 1068 | 0.5430 | 0.7267 | 0.5430 | 0.7369 |
| 0.064 | 9.9074 | 1070 | 0.5428 | 0.7267 | 0.5428 | 0.7368 |
| 0.064 | 9.9259 | 1072 | 0.5423 | 0.7267 | 0.5423 | 0.7364 |
| 0.064 | 9.9444 | 1074 | 0.5424 | 0.7267 | 0.5424 | 0.7365 |
| 0.064 | 9.9630 | 1076 | 0.5427 | 0.7267 | 0.5427 | 0.7367 |
| 0.064 | 9.9815 | 1078 | 0.5429 | 0.7267 | 0.5429 | 0.7368 |
| 0.064 | 10.0 | 1080 | 0.5431 | 0.7267 | 0.5431 | 0.7369 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
furrutiav/roberta_mixtral_nllfg_rubric_sst2_tf_idf_perplexity | furrutiav | 2024-11-25T16:17:13Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-11-25T16:16:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
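A minimal sketch of loading this checkpoint for feature extraction with 🤗 Transformers, assuming the repository ID from this card; the example sentence and the mean-pooling step are illustrative choices, not taken from the card.

```python
import torch
from transformers import AutoTokenizer, AutoModel

repo_id = "furrutiav/roberta_mixtral_nllfg_rubric_sst2_tf_idf_perplexity"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

# Illustrative input sentence (an assumption, not from the card)
inputs = tokenizer("A quick example sentence.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the last hidden state into a single sentence embedding
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)
```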
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
exala/db_fe2_2.2 | exala | 2024-11-25T16:14:11Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T16:13:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
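A minimal sketch using the 🤗 `pipeline` API, assuming the repository ID from this card; the input text is an illustrative assumption, and the returned label names depend on the (undocumented) training setup.

```python
from transformers import pipeline

# Repository ID taken from this card; the input text is an illustrative assumption.
classifier = pipeline("text-classification", model="exala/db_fe2_2.2")
print(classifier("A short example input."))
```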
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
camidenecken/RoBERTa-RM1-v2-4-rm-v29 | camidenecken | 2024-11-25T16:12:06Z | 186 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T16:11:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
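A minimal sketch of scoring a text with this classifier, assuming the repository ID from this card; the input text is an illustrative assumption, and the number and meaning of the output labels are not documented here.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "camidenecken/RoBERTa-RM1-v2-4-rm-v29"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Illustrative input; the label set of this checkpoint is not documented in the card.
inputs = tokenizer("A short example input.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```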
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PrunaAI/yosefw-llama-3.2-180m-amharic-instruct-apo-bnb-8bit-smashed | PrunaAI | 2024-11-25T16:11:11Z | 5 | 0 | null | [
"safetensors",
"llama",
"pruna-ai",
"base_model:yosefw/llama-3.2-180m-amharic-instruct-apo",
"base_model:quantized:yosefw/llama-3.2-180m-amharic-instruct-apo",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-25T15:07:35Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: yosefw/llama-3.2-180m-amharic-instruct-apo
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json` after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo yosefw/llama-3.2-180m-amharic-instruct-apo are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/yosefw-llama-3.2-180m-amharic-instruct-apo-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("yosefw/llama-3.2-180m-amharic-instruct-apo")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model yosefw/llama-3.2-180m-amharic-instruct-apo before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html). |
ctranslate2-4you/Qwen2.5-0.5B-Instruct-ct2-int8 | ctranslate2-4you | 2024-11-25T16:10:29Z | 7 | 0 | null | [
"ctranslate2",
"Qwen2.5",
"chat",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct",
"region:us"
] | null | 2024-11-25T14:14:31Z | ---
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
base_model_relation: quantized
tags:
- ctranslate2
- Qwen2.5
- chat
---
Conversion of https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct into the ```ctranslate2``` format using ```int8``` quantization.
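A conversion along these lines can be reproduced with the `ctranslate2` Python converter; the sketch below assumes `transformers` and a recent `ctranslate2` are installed, and the output directory name is an arbitrary choice.

```python
from ctranslate2.converters import TransformersConverter

# Output directory name is arbitrary (an assumption, not part of this repo's workflow).
converter = TransformersConverter("Qwen/Qwen2.5-0.5B-Instruct")
converter.convert("Qwen2.5-0.5B-Instruct-ct2-int8", quantization="int8", force=True)
```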
NOTE #1: This requires a version of ```ctranslate2``` GREATER THAN 4.5.0.
NOTE #2: The sample scripts below require installing the necessary ```CUDA``` and ```CUDNN``` libraries via ```pip```. If you rely on a systemwide installation instead, adjust your code accordingly.
Requirements:
- torch 2.4.0+cu124
- nvidia-cublas-cu12 12.4.2.65
- nvidia-cuda-nvrtc-cu12 12.4.99
- nvidia-cuda-runtime-cu12 12.4.99
- nvidia-cudnn-cu12 9.1.0.70
- numpy==1.26.4 (YOU MUST DOWNGRADE FROM THE NUMPY VERSION THAT CTRANSLATE2 INSTALLS BY DEFAULT)
- All other traditional dependencies like ```transformers```, ```accelerate```, etc.
<details><summary>Sample Script #1 (non-streaming):</summary>
```Python
import sys
import os
os.environ['KMP_DUPLICATE_LIB_OK']='TRUE'
from pathlib import Path
def set_cuda_paths():
venv_base = Path(sys.executable).parent.parent
nvidia_base_path = venv_base / 'Lib' / 'site-packages' / 'nvidia'
cuda_path = nvidia_base_path / 'cuda_runtime' / 'bin'
cublas_path = nvidia_base_path / 'cublas' / 'bin'
cudnn_path = nvidia_base_path / 'cudnn' / 'bin'
nvrtc_path = nvidia_base_path / 'cuda_nvrtc' / 'bin'
paths_to_add = [
str(cuda_path),
str(cublas_path),
str(cudnn_path),
str(nvrtc_path),
]
env_vars = ['CUDA_PATH', 'CUDA_PATH_V12_4', 'PATH']
for env_var in env_vars:
current_value = os.environ.get(env_var, '')
new_value = os.pathsep.join(paths_to_add + [current_value] if current_value else paths_to_add)
os.environ[env_var] = new_value
set_cuda_paths()
import ctranslate2
import gc
import torch
from transformers import AutoTokenizer
import pynvml
from constants import user_message, system_message  # prompts defined in a local constants.py supplied by the user
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
model_dir = r"[INSERT PATH TO FOLDER CONTAINING THE MODEL FILES HERE]"
def build_prompt():
prompt = f"""<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
"""
return prompt
def main():
model_name = os.path.basename(model_dir)
beam_size_value = 1
intra_threads = max(os.cpu_count() - 4, 4)
generator = ctranslate2.Generator(
model_dir,
device="cuda",
compute_type="int8",
intra_threads=intra_threads
)
tokenizer = AutoTokenizer.from_pretrained(model_dir, add_prefix_space=None)
prompt = build_prompt()
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results_batch = generator.generate_batch(
[tokens],
include_prompt_in_result=False,
max_batch_size=4096,
batch_type="tokens",
beam_size=beam_size_value,
num_hypotheses=1,
max_length=512,
sampling_temperature=0.0,
)
output = tokenizer.decode(results_batch[0].sequences_ids[0])
print("\nGenerated response:\n")
print(output)
del generator
del tokenizer
torch.cuda.empty_cache()
gc.collect()
if __name__ == "__main__":
main()
```
</details>
<details><summary>Sample Script #2 (streaming)</summary>
```Python
import sys
import os
os.environ['KMP_DUPLICATE_LIB_OK']='TRUE'
from pathlib import Path
def set_cuda_paths():
venv_base = Path(sys.executable).parent.parent
nvidia_base_path = venv_base / 'Lib' / 'site-packages' / 'nvidia'
cuda_path = nvidia_base_path / 'cuda_runtime' / 'bin'
cublas_path = nvidia_base_path / 'cublas' / 'bin'
cudnn_path = nvidia_base_path / 'cudnn' / 'bin'
nvrtc_path = nvidia_base_path / 'cuda_nvrtc' / 'bin'
paths_to_add = [
str(cuda_path),
str(cublas_path),
str(cudnn_path),
str(nvrtc_path),
]
env_vars = ['CUDA_PATH', 'CUDA_PATH_V12_4', 'PATH']
for env_var in env_vars:
current_value = os.environ.get(env_var, '')
new_value = os.pathsep.join(paths_to_add + [current_value] if current_value else paths_to_add)
os.environ[env_var] = new_value
set_cuda_paths()
import ctranslate2
import gc
import torch
from transformers import AutoTokenizer
import pynvml
from constants import user_message, system_message  # prompts defined in a local constants.py supplied by the user
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
model_dir = r"[PATH TO FOLDER CONTAINING THE MODEL FILES]"
def build_prompt():
prompt = f"""<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
"""
return prompt
def main():
generator = ctranslate2.Generator(
model_dir,
device="cuda",
compute_type="int8",
)
tokenizer = AutoTokenizer.from_pretrained(model_dir)
prompt = build_prompt()
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
# Initialize token iterator
token_iterator = generator.generate_tokens(
[tokens],
max_length=512,
sampling_temperature=0.0
)
decoded_output = ""
tokens_buffer = []
try:
for token_result in token_iterator:
token_id = token_result.token_id
token = tokenizer.convert_ids_to_tokens(token_id)
if token_id == tokenizer.eos_token_id:
break
            is_new_word = token.startswith("Ġ")  # "Ġ" marks a token that begins a new word in byte-level BPE
if is_new_word and tokens_buffer:
word = tokenizer.decode(tokens_buffer)
print(word, end='', flush=True)
decoded_output += word
tokens_buffer = []
tokens_buffer.append(token_id)
if tokens_buffer:
word = tokenizer.decode(tokens_buffer)
print(word, end='', flush=True)
decoded_output += word
except KeyboardInterrupt:
print("\nGeneration interrupted")
del generator
del tokenizer
torch.cuda.empty_cache()
gc.collect()
if __name__ == "__main__":
main()
```
</details> |
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k20_task5_organization_fold1 | MayBashendy | 2024-11-25T16:08:29Z | 165 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T15:59:07Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k20_task5_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k20_task5_organization_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7190
- Qwk: 0.7418
- Mse: 0.7190
- Rmse: 0.8479
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
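A minimal sketch of how the values above map onto 🤗 `TrainingArguments`; the output directory is a hypothetical placeholder, not taken from this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_task5_organization_fold1",  # hypothetical placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```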
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0208 | 2 | 3.7011 | -0.0414 | 3.7011 | 1.9238 |
| No log | 0.0417 | 4 | 2.4385 | -0.0438 | 2.4385 | 1.5616 |
| No log | 0.0625 | 6 | 1.7510 | -0.1905 | 1.7510 | 1.3232 |
| No log | 0.0833 | 8 | 1.3098 | 0.0074 | 1.3098 | 1.1445 |
| No log | 0.1042 | 10 | 1.3472 | 0.1337 | 1.3472 | 1.1607 |
| No log | 0.125 | 12 | 1.3634 | 0.1784 | 1.3634 | 1.1677 |
| No log | 0.1458 | 14 | 1.2057 | 0.4609 | 1.2057 | 1.0980 |
| No log | 0.1667 | 16 | 0.9815 | 0.2279 | 0.9815 | 0.9907 |
| No log | 0.1875 | 18 | 0.9497 | 0.1571 | 0.9497 | 0.9746 |
| No log | 0.2083 | 20 | 0.9661 | 0.1571 | 0.9661 | 0.9829 |
| No log | 0.2292 | 22 | 0.9711 | 0.1571 | 0.9711 | 0.9854 |
| No log | 0.25 | 24 | 0.9842 | 0.1571 | 0.9842 | 0.9921 |
| No log | 0.2708 | 26 | 1.0060 | 0.1571 | 1.0060 | 1.0030 |
| No log | 0.2917 | 28 | 1.0494 | 0.1571 | 1.0494 | 1.0244 |
| No log | 0.3125 | 30 | 1.0682 | 0.2279 | 1.0682 | 1.0335 |
| No log | 0.3333 | 32 | 1.0762 | 0.2279 | 1.0762 | 1.0374 |
| No log | 0.3542 | 34 | 1.1360 | 0.3354 | 1.1360 | 1.0659 |
| No log | 0.375 | 36 | 1.0428 | 0.2857 | 1.0428 | 1.0212 |
| No log | 0.3958 | 38 | 0.9391 | 0.2279 | 0.9391 | 0.9690 |
| No log | 0.4167 | 40 | 0.9369 | 0.2279 | 0.9369 | 0.9679 |
| No log | 0.4375 | 42 | 0.9362 | 0.1571 | 0.9362 | 0.9676 |
| No log | 0.4583 | 44 | 0.9314 | 0.0 | 0.9314 | 0.9651 |
| No log | 0.4792 | 46 | 0.8929 | 0.0734 | 0.8929 | 0.9449 |
| No log | 0.5 | 48 | 0.7952 | 0.3448 | 0.7952 | 0.8918 |
| No log | 0.5208 | 50 | 0.8192 | 0.4097 | 0.8192 | 0.9051 |
| No log | 0.5417 | 52 | 0.8258 | 0.5318 | 0.8258 | 0.9087 |
| No log | 0.5625 | 54 | 0.7687 | 0.5779 | 0.7687 | 0.8767 |
| No log | 0.5833 | 56 | 0.7009 | 0.6439 | 0.7009 | 0.8372 |
| No log | 0.6042 | 58 | 0.6385 | 0.6439 | 0.6385 | 0.7991 |
| No log | 0.625 | 60 | 0.6227 | 0.7399 | 0.6227 | 0.7891 |
| No log | 0.6458 | 62 | 0.7400 | 0.6073 | 0.7400 | 0.8602 |
| No log | 0.6667 | 64 | 1.0882 | 0.2932 | 1.0882 | 1.0432 |
| No log | 0.6875 | 66 | 1.3061 | 0.1501 | 1.3061 | 1.1429 |
| No log | 0.7083 | 68 | 1.2952 | 0.2171 | 1.2952 | 1.1381 |
| No log | 0.7292 | 70 | 1.1006 | 0.2279 | 1.1006 | 1.0491 |
| No log | 0.75 | 72 | 0.7765 | 0.4516 | 0.7765 | 0.8812 |
| No log | 0.7708 | 74 | 0.6270 | 0.4984 | 0.6270 | 0.7918 |
| No log | 0.7917 | 76 | 0.7927 | 0.4586 | 0.7927 | 0.8903 |
| No log | 0.8125 | 78 | 0.9862 | 0.4379 | 0.9862 | 0.9931 |
| No log | 0.8333 | 80 | 1.0245 | 0.3884 | 1.0245 | 1.0122 |
| No log | 0.8542 | 82 | 0.9010 | 0.2857 | 0.9010 | 0.9492 |
| No log | 0.875 | 84 | 0.7488 | 0.3493 | 0.7488 | 0.8653 |
| No log | 0.8958 | 86 | 0.7518 | 0.6037 | 0.7518 | 0.8671 |
| No log | 0.9167 | 88 | 0.7576 | 0.5821 | 0.7576 | 0.8704 |
| No log | 0.9375 | 90 | 0.7081 | 0.5584 | 0.7081 | 0.8415 |
| No log | 0.9583 | 92 | 0.7491 | 0.5161 | 0.7491 | 0.8655 |
| No log | 0.9792 | 94 | 0.7663 | 0.4684 | 0.7663 | 0.8754 |
| No log | 1.0 | 96 | 0.7239 | 0.5584 | 0.7239 | 0.8508 |
| No log | 1.0208 | 98 | 0.7146 | 0.5584 | 0.7146 | 0.8454 |
| No log | 1.0417 | 100 | 0.7224 | 0.6012 | 0.7224 | 0.8500 |
| No log | 1.0625 | 102 | 0.7056 | 0.6012 | 0.7056 | 0.8400 |
| No log | 1.0833 | 104 | 0.7155 | 0.6012 | 0.7155 | 0.8459 |
| No log | 1.1042 | 106 | 0.6799 | 0.5584 | 0.6799 | 0.8246 |
| No log | 1.125 | 108 | 0.6474 | 0.5584 | 0.6474 | 0.8046 |
| No log | 1.1458 | 110 | 0.6747 | 0.6460 | 0.6747 | 0.8214 |
| No log | 1.1667 | 112 | 0.6933 | 0.6460 | 0.6933 | 0.8326 |
| No log | 1.1875 | 114 | 0.6585 | 0.6460 | 0.6585 | 0.8115 |
| No log | 1.2083 | 116 | 0.6325 | 0.6037 | 0.6325 | 0.7953 |
| No log | 1.2292 | 118 | 0.6223 | 0.5870 | 0.6223 | 0.7889 |
| No log | 1.25 | 120 | 0.6351 | 0.6264 | 0.6351 | 0.7969 |
| No log | 1.2708 | 122 | 0.7259 | 0.6094 | 0.7259 | 0.8520 |
| No log | 1.2917 | 124 | 0.7670 | 0.6247 | 0.7670 | 0.8758 |
| No log | 1.3125 | 126 | 0.7328 | 0.6094 | 0.7328 | 0.8561 |
| No log | 1.3333 | 128 | 0.6082 | 0.6817 | 0.6082 | 0.7799 |
| No log | 1.3542 | 130 | 0.5767 | 0.5924 | 0.5767 | 0.7594 |
| No log | 1.375 | 132 | 0.6116 | 0.6817 | 0.6116 | 0.7821 |
| No log | 1.3958 | 134 | 0.6693 | 0.6094 | 0.6693 | 0.8181 |
| No log | 1.4167 | 136 | 0.6543 | 0.5355 | 0.6543 | 0.8089 |
| No log | 1.4375 | 138 | 0.6523 | 0.5518 | 0.6523 | 0.8076 |
| No log | 1.4583 | 140 | 0.6589 | 0.5518 | 0.6589 | 0.8118 |
| No log | 1.4792 | 142 | 0.6673 | 0.4100 | 0.6673 | 0.8169 |
| No log | 1.5 | 144 | 0.6712 | 0.4540 | 0.6712 | 0.8193 |
| No log | 1.5208 | 146 | 0.6968 | 0.4100 | 0.6968 | 0.8347 |
| No log | 1.5417 | 148 | 0.6734 | 0.4540 | 0.6734 | 0.8206 |
| No log | 1.5625 | 150 | 0.6436 | 0.5355 | 0.6436 | 0.8023 |
| No log | 1.5833 | 152 | 0.6589 | 0.6094 | 0.6589 | 0.8117 |
| No log | 1.6042 | 154 | 0.7504 | 0.6247 | 0.7504 | 0.8662 |
| No log | 1.625 | 156 | 1.0720 | 0.4859 | 1.0720 | 1.0354 |
| No log | 1.6458 | 158 | 1.4410 | 0.3258 | 1.4410 | 1.2004 |
| No log | 1.6667 | 160 | 1.2955 | 0.3349 | 1.2955 | 1.1382 |
| No log | 1.6875 | 162 | 1.1476 | 0.4221 | 1.1476 | 1.0713 |
| No log | 1.7083 | 164 | 0.9996 | 0.5526 | 0.9996 | 0.9998 |
| No log | 1.7292 | 166 | 1.1110 | 0.5526 | 1.1110 | 1.0540 |
| No log | 1.75 | 168 | 1.4659 | 0.3797 | 1.4659 | 1.2107 |
| No log | 1.7708 | 170 | 1.7049 | 0.3861 | 1.7049 | 1.3057 |
| No log | 1.7917 | 172 | 1.6896 | 0.3390 | 1.6896 | 1.2998 |
| No log | 1.8125 | 174 | 1.2575 | 0.4839 | 1.2575 | 1.1214 |
| No log | 1.8333 | 176 | 0.8148 | 0.6088 | 0.8148 | 0.9026 |
| No log | 1.8542 | 178 | 0.7265 | 0.6429 | 0.7265 | 0.8524 |
| No log | 1.875 | 180 | 0.7599 | 0.4901 | 0.7599 | 0.8717 |
| No log | 1.8958 | 182 | 0.8335 | 0.5687 | 0.8335 | 0.9130 |
| No log | 1.9167 | 184 | 0.8147 | 0.5687 | 0.8147 | 0.9026 |
| No log | 1.9375 | 186 | 0.7760 | 0.625 | 0.7760 | 0.8809 |
| No log | 1.9583 | 188 | 0.7276 | 0.6610 | 0.7276 | 0.8530 |
| No log | 1.9792 | 190 | 0.6921 | 0.6782 | 0.6921 | 0.8319 |
| No log | 2.0 | 192 | 0.6482 | 0.6782 | 0.6482 | 0.8051 |
| No log | 2.0208 | 194 | 0.6232 | 0.6948 | 0.6232 | 0.7894 |
| No log | 2.0417 | 196 | 0.6357 | 0.6610 | 0.6357 | 0.7973 |
| No log | 2.0625 | 198 | 0.6399 | 0.6462 | 0.6399 | 0.8000 |
| No log | 2.0833 | 200 | 0.7567 | 0.6606 | 0.7567 | 0.8699 |
| No log | 2.1042 | 202 | 0.8084 | 0.6606 | 0.8084 | 0.8991 |
| No log | 2.125 | 204 | 0.9602 | 0.5800 | 0.9602 | 0.9799 |
| No log | 2.1458 | 206 | 0.9414 | 0.6603 | 0.9414 | 0.9703 |
| No log | 2.1667 | 208 | 0.9120 | 0.6606 | 0.9120 | 0.9550 |
| No log | 2.1875 | 210 | 0.8045 | 0.6606 | 0.8045 | 0.8969 |
| No log | 2.2083 | 212 | 0.7227 | 0.6610 | 0.7227 | 0.8501 |
| No log | 2.2292 | 214 | 0.6818 | 0.6610 | 0.6818 | 0.8257 |
| No log | 2.25 | 216 | 0.7249 | 0.6610 | 0.7249 | 0.8514 |
| No log | 2.2708 | 218 | 0.8788 | 0.6603 | 0.8788 | 0.9374 |
| No log | 2.2917 | 220 | 0.8864 | 0.5857 | 0.8864 | 0.9415 |
| No log | 2.3125 | 222 | 0.7320 | 0.6614 | 0.7320 | 0.8556 |
| No log | 2.3333 | 224 | 0.6556 | 0.7036 | 0.6556 | 0.8097 |
| No log | 2.3542 | 226 | 0.5894 | 0.7210 | 0.5894 | 0.7677 |
| No log | 2.375 | 228 | 0.5979 | 0.7210 | 0.5979 | 0.7732 |
| No log | 2.3958 | 230 | 0.6902 | 0.6888 | 0.6902 | 0.8308 |
| No log | 2.4167 | 232 | 0.7470 | 0.6603 | 0.7470 | 0.8643 |
| No log | 2.4375 | 234 | 0.6721 | 0.6606 | 0.6721 | 0.8198 |
| No log | 2.4583 | 236 | 0.5689 | 0.6782 | 0.5689 | 0.7543 |
| No log | 2.4792 | 238 | 0.5290 | 0.7136 | 0.5290 | 0.7273 |
| No log | 2.5 | 240 | 0.5427 | 0.6505 | 0.5427 | 0.7367 |
| No log | 2.5208 | 242 | 0.5439 | 0.6114 | 0.5439 | 0.7375 |
| No log | 2.5417 | 244 | 0.6309 | 0.6094 | 0.6309 | 0.7943 |
| No log | 2.5625 | 246 | 0.8018 | 0.5972 | 0.8018 | 0.8954 |
| No log | 2.5833 | 248 | 0.7668 | 0.5420 | 0.7668 | 0.8757 |
| No log | 2.6042 | 250 | 0.6114 | 0.6094 | 0.6114 | 0.7819 |
| No log | 2.625 | 252 | 0.5703 | 0.5990 | 0.5703 | 0.7552 |
| No log | 2.6458 | 254 | 0.5820 | 0.5801 | 0.5820 | 0.7629 |
| No log | 2.6667 | 256 | 0.5924 | 0.5990 | 0.5924 | 0.7697 |
| No log | 2.6875 | 258 | 0.6686 | 0.6094 | 0.6686 | 0.8177 |
| No log | 2.7083 | 260 | 0.7324 | 0.5420 | 0.7324 | 0.8558 |
| No log | 2.7292 | 262 | 0.7380 | 0.6094 | 0.7380 | 0.8590 |
| No log | 2.75 | 264 | 0.7309 | 0.6094 | 0.7309 | 0.8549 |
| No log | 2.7708 | 266 | 0.6611 | 0.6094 | 0.6611 | 0.8131 |
| No log | 2.7917 | 268 | 0.6155 | 0.6456 | 0.6155 | 0.7845 |
| No log | 2.8125 | 270 | 0.5598 | 0.6474 | 0.5598 | 0.7482 |
| No log | 2.8333 | 272 | 0.6019 | 0.6267 | 0.6019 | 0.7758 |
| No log | 2.8542 | 274 | 0.7179 | 0.6094 | 0.7179 | 0.8473 |
| No log | 2.875 | 276 | 0.7198 | 0.6094 | 0.7198 | 0.8484 |
| No log | 2.8958 | 278 | 0.6403 | 0.6267 | 0.6403 | 0.8002 |
| No log | 2.9167 | 280 | 0.5768 | 0.6267 | 0.5768 | 0.7595 |
| No log | 2.9375 | 282 | 0.5532 | 0.6267 | 0.5532 | 0.7438 |
| No log | 2.9583 | 284 | 0.5678 | 0.6267 | 0.5678 | 0.7535 |
| No log | 2.9792 | 286 | 0.6677 | 0.6782 | 0.6677 | 0.8172 |
| No log | 3.0 | 288 | 0.8790 | 0.5682 | 0.8790 | 0.9376 |
| No log | 3.0208 | 290 | 0.9633 | 0.5783 | 0.9633 | 0.9815 |
| No log | 3.0417 | 292 | 1.0460 | 0.5783 | 1.0460 | 1.0227 |
| No log | 3.0625 | 294 | 1.1069 | 0.4784 | 1.1069 | 1.0521 |
| No log | 3.0833 | 296 | 0.9274 | 0.5678 | 0.9274 | 0.9630 |
| No log | 3.1042 | 298 | 0.6767 | 0.7053 | 0.6767 | 0.8226 |
| No log | 3.125 | 300 | 0.6318 | 0.7085 | 0.6318 | 0.7948 |
| No log | 3.1458 | 302 | 0.7266 | 0.6606 | 0.7266 | 0.8524 |
| No log | 3.1667 | 304 | 0.8306 | 0.6603 | 0.8306 | 0.9114 |
| No log | 3.1875 | 306 | 0.8492 | 0.6606 | 0.8492 | 0.9215 |
| No log | 3.2083 | 308 | 0.7816 | 0.6606 | 0.7816 | 0.8841 |
| No log | 3.2292 | 310 | 0.6829 | 0.6934 | 0.6829 | 0.8264 |
| No log | 3.25 | 312 | 0.6681 | 0.6934 | 0.6681 | 0.8174 |
| No log | 3.2708 | 314 | 0.6857 | 0.7241 | 0.6857 | 0.8281 |
| No log | 3.2917 | 316 | 0.6776 | 0.7418 | 0.6776 | 0.8231 |
| No log | 3.3125 | 318 | 0.6709 | 0.7254 | 0.6709 | 0.8191 |
| No log | 3.3333 | 320 | 0.7204 | 0.7053 | 0.7204 | 0.8488 |
| No log | 3.3542 | 322 | 0.7734 | 0.7024 | 0.7734 | 0.8794 |
| No log | 3.375 | 324 | 0.7302 | 0.7172 | 0.7302 | 0.8545 |
| No log | 3.3958 | 326 | 0.6286 | 0.7210 | 0.6286 | 0.7928 |
| No log | 3.4167 | 328 | 0.6132 | 0.7418 | 0.6132 | 0.7831 |
| No log | 3.4375 | 330 | 0.6468 | 0.6782 | 0.6468 | 0.8042 |
| No log | 3.4583 | 332 | 0.5939 | 0.6782 | 0.5939 | 0.7706 |
| No log | 3.4792 | 334 | 0.5561 | 0.6134 | 0.5561 | 0.7457 |
| No log | 3.5 | 336 | 0.5552 | 0.6134 | 0.5552 | 0.7451 |
| No log | 3.5208 | 338 | 0.5654 | 0.6632 | 0.5654 | 0.7519 |
| No log | 3.5417 | 340 | 0.6991 | 0.6767 | 0.6991 | 0.8361 |
| No log | 3.5625 | 342 | 0.8140 | 0.6603 | 0.8140 | 0.9022 |
| No log | 3.5833 | 344 | 0.7830 | 0.6606 | 0.7830 | 0.8849 |
| No log | 3.6042 | 346 | 0.7277 | 0.6909 | 0.7277 | 0.8530 |
| No log | 3.625 | 348 | 0.7159 | 0.7160 | 0.7159 | 0.8461 |
| No log | 3.6458 | 350 | 0.7340 | 0.7024 | 0.7340 | 0.8568 |
| No log | 3.6667 | 352 | 0.8029 | 0.6888 | 0.8029 | 0.8960 |
| No log | 3.6875 | 354 | 0.7444 | 0.7510 | 0.7444 | 0.8628 |
| No log | 3.7083 | 356 | 0.6048 | 0.7053 | 0.6048 | 0.7777 |
| No log | 3.7292 | 358 | 0.5510 | 0.7085 | 0.5510 | 0.7423 |
| No log | 3.75 | 360 | 0.5601 | 0.7085 | 0.5601 | 0.7484 |
| No log | 3.7708 | 362 | 0.6069 | 0.7085 | 0.6069 | 0.7791 |
| No log | 3.7917 | 364 | 0.5448 | 0.6643 | 0.5448 | 0.7381 |
| No log | 3.8125 | 366 | 0.5062 | 0.6643 | 0.5062 | 0.7115 |
| No log | 3.8333 | 368 | 0.4992 | 0.6495 | 0.4992 | 0.7065 |
| No log | 3.8542 | 370 | 0.5054 | 0.6643 | 0.5054 | 0.7109 |
| No log | 3.875 | 372 | 0.5181 | 0.6643 | 0.5181 | 0.7198 |
| No log | 3.8958 | 374 | 0.5842 | 0.6456 | 0.5842 | 0.7643 |
| No log | 3.9167 | 376 | 0.6201 | 0.6456 | 0.6201 | 0.7875 |
| No log | 3.9375 | 378 | 0.6011 | 0.6934 | 0.6011 | 0.7753 |
| No log | 3.9583 | 380 | 0.5808 | 0.6934 | 0.5808 | 0.7621 |
| No log | 3.9792 | 382 | 0.5520 | 0.7085 | 0.5520 | 0.7430 |
| No log | 4.0 | 384 | 0.5867 | 0.7085 | 0.5867 | 0.7660 |
| No log | 4.0208 | 386 | 0.7020 | 0.6603 | 0.7020 | 0.8379 |
| No log | 4.0417 | 388 | 0.7933 | 0.6603 | 0.7933 | 0.8907 |
| No log | 4.0625 | 390 | 0.8136 | 0.6458 | 0.8136 | 0.9020 |
| No log | 4.0833 | 392 | 0.8246 | 0.5910 | 0.8246 | 0.9081 |
| No log | 4.1042 | 394 | 0.7342 | 0.5824 | 0.7342 | 0.8568 |
| No log | 4.125 | 396 | 0.5961 | 0.6782 | 0.5961 | 0.7721 |
| No log | 4.1458 | 398 | 0.5394 | 0.6467 | 0.5394 | 0.7344 |
| No log | 4.1667 | 400 | 0.5397 | 0.7317 | 0.5397 | 0.7346 |
| No log | 4.1875 | 402 | 0.5678 | 0.7418 | 0.5678 | 0.7536 |
| No log | 4.2083 | 404 | 0.6792 | 0.6909 | 0.6792 | 0.8241 |
| No log | 4.2292 | 406 | 0.6988 | 0.6909 | 0.6988 | 0.8359 |
| No log | 4.25 | 408 | 0.6683 | 0.7198 | 0.6683 | 0.8175 |
| No log | 4.2708 | 410 | 0.6525 | 0.7036 | 0.6525 | 0.8078 |
| No log | 4.2917 | 412 | 0.6641 | 0.7148 | 0.6641 | 0.8149 |
| No log | 4.3125 | 414 | 0.6626 | 0.7148 | 0.6626 | 0.8140 |
| No log | 4.3333 | 416 | 0.6131 | 0.7222 | 0.6131 | 0.7830 |
| No log | 4.3542 | 418 | 0.6106 | 0.7222 | 0.6106 | 0.7814 |
| No log | 4.375 | 420 | 0.6849 | 0.7137 | 0.6849 | 0.8276 |
| No log | 4.3958 | 422 | 0.7866 | 0.6623 | 0.7866 | 0.8869 |
| No log | 4.4167 | 424 | 0.7545 | 0.5962 | 0.7545 | 0.8686 |
| No log | 4.4375 | 426 | 0.6560 | 0.6622 | 0.6560 | 0.8099 |
| No log | 4.4583 | 428 | 0.5517 | 0.7374 | 0.5517 | 0.7427 |
| No log | 4.4792 | 430 | 0.5236 | 0.6675 | 0.5236 | 0.7236 |
| No log | 4.5 | 432 | 0.5290 | 0.6675 | 0.5290 | 0.7273 |
| No log | 4.5208 | 434 | 0.5370 | 0.6514 | 0.5370 | 0.7328 |
| No log | 4.5417 | 436 | 0.6001 | 0.7374 | 0.6001 | 0.7747 |
| No log | 4.5625 | 438 | 0.7926 | 0.6452 | 0.7926 | 0.8903 |
| No log | 4.5833 | 440 | 0.9031 | 0.5910 | 0.9031 | 0.9503 |
| No log | 4.6042 | 442 | 0.8582 | 0.6458 | 0.8582 | 0.9264 |
| No log | 4.625 | 444 | 0.7727 | 0.6452 | 0.7727 | 0.8790 |
| No log | 4.6458 | 446 | 0.7203 | 0.6606 | 0.7203 | 0.8487 |
| No log | 4.6667 | 448 | 0.6920 | 0.6610 | 0.6920 | 0.8319 |
| No log | 4.6875 | 450 | 0.7367 | 0.6610 | 0.7367 | 0.8583 |
| No log | 4.7083 | 452 | 0.8269 | 0.6606 | 0.8269 | 0.9093 |
| No log | 4.7292 | 454 | 0.9118 | 0.6071 | 0.9118 | 0.9549 |
| No log | 4.75 | 456 | 0.9079 | 0.62 | 0.9079 | 0.9528 |
| No log | 4.7708 | 458 | 0.8365 | 0.62 | 0.8365 | 0.9146 |
| No log | 4.7917 | 460 | 0.7491 | 0.6473 | 0.7491 | 0.8655 |
| No log | 4.8125 | 462 | 0.7113 | 0.6767 | 0.7113 | 0.8434 |
| No log | 4.8333 | 464 | 0.7313 | 0.6767 | 0.7313 | 0.8552 |
| No log | 4.8542 | 466 | 0.7941 | 0.6477 | 0.7941 | 0.8911 |
| No log | 4.875 | 468 | 0.8415 | 0.6852 | 0.8415 | 0.9174 |
| No log | 4.8958 | 470 | 0.8336 | 0.6477 | 0.8336 | 0.9130 |
| No log | 4.9167 | 472 | 0.7393 | 0.6909 | 0.7393 | 0.8598 |
| No log | 4.9375 | 474 | 0.6122 | 0.7241 | 0.6122 | 0.7824 |
| No log | 4.9583 | 476 | 0.5679 | 0.7254 | 0.5679 | 0.7536 |
| No log | 4.9792 | 478 | 0.5886 | 0.7241 | 0.5886 | 0.7672 |
| No log | 5.0 | 480 | 0.6417 | 0.7241 | 0.6417 | 0.8011 |
| No log | 5.0208 | 482 | 0.6557 | 0.7241 | 0.6557 | 0.8098 |
| No log | 5.0417 | 484 | 0.6092 | 0.7085 | 0.6092 | 0.7805 |
| No log | 5.0625 | 486 | 0.5695 | 0.7254 | 0.5695 | 0.7547 |
| No log | 5.0833 | 488 | 0.5659 | 0.7545 | 0.5659 | 0.7523 |
| No log | 5.1042 | 490 | 0.5605 | 0.7386 | 0.5605 | 0.7487 |
| No log | 5.125 | 492 | 0.5565 | 0.7545 | 0.5565 | 0.7460 |
| No log | 5.1458 | 494 | 0.5932 | 0.7085 | 0.5932 | 0.7702 |
| No log | 5.1667 | 496 | 0.6855 | 0.7241 | 0.6855 | 0.8280 |
| No log | 5.1875 | 498 | 0.7112 | 0.6934 | 0.7112 | 0.8433 |
| 0.4258 | 5.2083 | 500 | 0.6656 | 0.6934 | 0.6656 | 0.8158 |
| 0.4258 | 5.2292 | 502 | 0.6273 | 0.6782 | 0.6273 | 0.7920 |
| 0.4258 | 5.25 | 504 | 0.6333 | 0.6934 | 0.6333 | 0.7958 |
| 0.4258 | 5.2708 | 506 | 0.6969 | 0.6610 | 0.6969 | 0.8348 |
| 0.4258 | 5.2917 | 508 | 0.7932 | 0.6610 | 0.7932 | 0.8906 |
| 0.4258 | 5.3125 | 510 | 0.8431 | 0.5625 | 0.8431 | 0.9182 |
| 0.4258 | 5.3333 | 512 | 0.8018 | 0.6606 | 0.8018 | 0.8955 |
| 0.4258 | 5.3542 | 514 | 0.7549 | 0.6606 | 0.7549 | 0.8689 |
| 0.4258 | 5.375 | 516 | 0.7456 | 0.6610 | 0.7456 | 0.8635 |
| 0.4258 | 5.3958 | 518 | 0.7993 | 0.62 | 0.7993 | 0.8940 |
| 0.4258 | 5.4167 | 520 | 0.8120 | 0.62 | 0.8120 | 0.9011 |
| 0.4258 | 5.4375 | 522 | 0.7604 | 0.6603 | 0.7604 | 0.8720 |
| 0.4258 | 5.4583 | 524 | 0.7137 | 0.6606 | 0.7137 | 0.8448 |
| 0.4258 | 5.4792 | 526 | 0.7237 | 0.6606 | 0.7237 | 0.8507 |
| 0.4258 | 5.5 | 528 | 0.7494 | 0.6603 | 0.7494 | 0.8657 |
| 0.4258 | 5.5208 | 530 | 0.7836 | 0.6603 | 0.7836 | 0.8852 |
| 0.4258 | 5.5417 | 532 | 0.7249 | 0.6610 | 0.7249 | 0.8514 |
| 0.4258 | 5.5625 | 534 | 0.6956 | 0.6610 | 0.6956 | 0.8340 |
| 0.4258 | 5.5833 | 536 | 0.6818 | 0.6610 | 0.6818 | 0.8257 |
| 0.4258 | 5.6042 | 538 | 0.7456 | 0.6606 | 0.7456 | 0.8635 |
| 0.4258 | 5.625 | 540 | 0.8460 | 0.6458 | 0.8460 | 0.9198 |
| 0.4258 | 5.6458 | 542 | 0.9422 | 0.5910 | 0.9422 | 0.9707 |
| 0.4258 | 5.6667 | 544 | 0.9159 | 0.6458 | 0.9159 | 0.9570 |
| 0.4258 | 5.6875 | 546 | 0.7840 | 0.6606 | 0.7840 | 0.8854 |
| 0.4258 | 5.7083 | 548 | 0.6827 | 0.6610 | 0.6827 | 0.8262 |
| 0.4258 | 5.7292 | 550 | 0.6640 | 0.6610 | 0.6640 | 0.8148 |
| 0.4258 | 5.75 | 552 | 0.6686 | 0.6610 | 0.6686 | 0.8177 |
| 0.4258 | 5.7708 | 554 | 0.7221 | 0.6610 | 0.7221 | 0.8498 |
| 0.4258 | 5.7917 | 556 | 0.7783 | 0.6445 | 0.7783 | 0.8822 |
| 0.4258 | 5.8125 | 558 | 0.8273 | 0.5824 | 0.8273 | 0.9096 |
| 0.4258 | 5.8333 | 560 | 0.8872 | 0.5910 | 0.8872 | 0.9419 |
| 0.4258 | 5.8542 | 562 | 0.8905 | 0.5910 | 0.8905 | 0.9437 |
| 0.4258 | 5.875 | 564 | 0.8493 | 0.6603 | 0.8493 | 0.9216 |
| 0.4258 | 5.8958 | 566 | 0.7374 | 0.6603 | 0.7374 | 0.8587 |
| 0.4258 | 5.9167 | 568 | 0.6854 | 0.6934 | 0.6854 | 0.8279 |
| 0.4258 | 5.9375 | 570 | 0.6526 | 0.6934 | 0.6526 | 0.8079 |
| 0.4258 | 5.9583 | 572 | 0.6718 | 0.6934 | 0.6718 | 0.8196 |
| 0.4258 | 5.9792 | 574 | 0.6972 | 0.6888 | 0.6972 | 0.8350 |
| 0.4258 | 6.0 | 576 | 0.6507 | 0.6934 | 0.6507 | 0.8067 |
| 0.4258 | 6.0208 | 578 | 0.6284 | 0.7085 | 0.6284 | 0.7927 |
| 0.4258 | 6.0417 | 580 | 0.6628 | 0.6934 | 0.6628 | 0.8141 |
| 0.4258 | 6.0625 | 582 | 0.7450 | 0.6603 | 0.7450 | 0.8631 |
| 0.4258 | 6.0833 | 584 | 0.7810 | 0.6603 | 0.7810 | 0.8837 |
| 0.4258 | 6.1042 | 586 | 0.7447 | 0.6603 | 0.7447 | 0.8629 |
| 0.4258 | 6.125 | 588 | 0.6805 | 0.6610 | 0.6805 | 0.8249 |
| 0.4258 | 6.1458 | 590 | 0.6155 | 0.7241 | 0.6155 | 0.7845 |
| 0.4258 | 6.1667 | 592 | 0.5836 | 0.7085 | 0.5836 | 0.7639 |
| 0.4258 | 6.1875 | 594 | 0.5805 | 0.7065 | 0.5805 | 0.7619 |
| 0.4258 | 6.2083 | 596 | 0.5951 | 0.7065 | 0.5951 | 0.7714 |
| 0.4258 | 6.2292 | 598 | 0.6400 | 0.7065 | 0.6400 | 0.8000 |
| 0.4258 | 6.25 | 600 | 0.7166 | 0.7137 | 0.7166 | 0.8465 |
| 0.4258 | 6.2708 | 602 | 0.7661 | 0.7137 | 0.7661 | 0.8753 |
| 0.4258 | 6.2917 | 604 | 0.7850 | 0.7137 | 0.7850 | 0.8860 |
| 0.4258 | 6.3125 | 606 | 0.8066 | 0.6881 | 0.8066 | 0.8981 |
| 0.4258 | 6.3333 | 608 | 0.8022 | 0.6603 | 0.8022 | 0.8957 |
| 0.4258 | 6.3542 | 610 | 0.7678 | 0.6603 | 0.7678 | 0.8762 |
| 0.4258 | 6.375 | 612 | 0.6610 | 0.7241 | 0.6610 | 0.8130 |
| 0.4258 | 6.3958 | 614 | 0.5812 | 0.7241 | 0.5812 | 0.7623 |
| 0.4258 | 6.4167 | 616 | 0.5526 | 0.7418 | 0.5526 | 0.7434 |
| 0.4258 | 6.4375 | 618 | 0.5606 | 0.7418 | 0.5606 | 0.7488 |
| 0.4258 | 6.4583 | 620 | 0.6176 | 0.6934 | 0.6176 | 0.7859 |
| 0.4258 | 6.4792 | 622 | 0.6863 | 0.6606 | 0.6863 | 0.8284 |
| 0.4258 | 6.5 | 624 | 0.7718 | 0.6603 | 0.7718 | 0.8785 |
| 0.4258 | 6.5208 | 626 | 0.8033 | 0.6603 | 0.8033 | 0.8963 |
| 0.4258 | 6.5417 | 628 | 0.8052 | 0.6603 | 0.8052 | 0.8973 |
| 0.4258 | 6.5625 | 630 | 0.7732 | 0.7160 | 0.7732 | 0.8793 |
| 0.4258 | 6.5833 | 632 | 0.7195 | 0.7160 | 0.7195 | 0.8483 |
| 0.4258 | 6.6042 | 634 | 0.6536 | 0.7198 | 0.6536 | 0.8084 |
| 0.4258 | 6.625 | 636 | 0.6260 | 0.7418 | 0.6260 | 0.7912 |
| 0.4258 | 6.6458 | 638 | 0.6552 | 0.7210 | 0.6552 | 0.8094 |
| 0.4258 | 6.6667 | 640 | 0.7220 | 0.6617 | 0.7220 | 0.8497 |
| 0.4258 | 6.6875 | 642 | 0.7673 | 0.6614 | 0.7673 | 0.8760 |
| 0.4258 | 6.7083 | 644 | 0.7638 | 0.6617 | 0.7638 | 0.8740 |
| 0.4258 | 6.7292 | 646 | 0.7532 | 0.6617 | 0.7532 | 0.8679 |
| 0.4258 | 6.75 | 648 | 0.7425 | 0.6610 | 0.7425 | 0.8617 |
| 0.4258 | 6.7708 | 650 | 0.7150 | 0.6610 | 0.7150 | 0.8456 |
| 0.4258 | 6.7917 | 652 | 0.6790 | 0.6610 | 0.6790 | 0.8240 |
| 0.4258 | 6.8125 | 654 | 0.6426 | 0.6934 | 0.6426 | 0.8016 |
| 0.4258 | 6.8333 | 656 | 0.6548 | 0.6934 | 0.6548 | 0.8092 |
| 0.4258 | 6.8542 | 658 | 0.7038 | 0.6610 | 0.7038 | 0.8389 |
| 0.4258 | 6.875 | 660 | 0.7891 | 0.6603 | 0.7891 | 0.8883 |
| 0.4258 | 6.8958 | 662 | 0.8684 | 0.6603 | 0.8684 | 0.9319 |
| 0.4258 | 6.9167 | 664 | 0.8899 | 0.5910 | 0.8899 | 0.9433 |
| 0.4258 | 6.9375 | 666 | 0.8651 | 0.5910 | 0.8651 | 0.9301 |
| 0.4258 | 6.9583 | 668 | 0.8118 | 0.6606 | 0.8118 | 0.9010 |
| 0.4258 | 6.9792 | 670 | 0.7616 | 0.6610 | 0.7616 | 0.8727 |
| 0.4258 | 7.0 | 672 | 0.7359 | 0.6782 | 0.7359 | 0.8578 |
| 0.4258 | 7.0208 | 674 | 0.7429 | 0.6782 | 0.7429 | 0.8619 |
| 0.4258 | 7.0417 | 676 | 0.7910 | 0.6606 | 0.7910 | 0.8894 |
| 0.4258 | 7.0625 | 678 | 0.8425 | 0.6458 | 0.8425 | 0.9179 |
| 0.4258 | 7.0833 | 680 | 0.8902 | 0.6458 | 0.8902 | 0.9435 |
| 0.4258 | 7.1042 | 682 | 0.8848 | 0.6458 | 0.8848 | 0.9406 |
| 0.4258 | 7.125 | 684 | 0.8453 | 0.6603 | 0.8453 | 0.9194 |
| 0.4258 | 7.1458 | 686 | 0.7884 | 0.6606 | 0.7884 | 0.8879 |
| 0.4258 | 7.1667 | 688 | 0.7366 | 0.6767 | 0.7366 | 0.8582 |
| 0.4258 | 7.1875 | 690 | 0.7233 | 0.6767 | 0.7233 | 0.8505 |
| 0.4258 | 7.2083 | 692 | 0.7232 | 0.7072 | 0.7232 | 0.8504 |
| 0.4258 | 7.2292 | 694 | 0.7318 | 0.6767 | 0.7318 | 0.8554 |
| 0.4258 | 7.25 | 696 | 0.7541 | 0.6767 | 0.7541 | 0.8684 |
| 0.4258 | 7.2708 | 698 | 0.7625 | 0.6606 | 0.7625 | 0.8732 |
| 0.4258 | 7.2917 | 700 | 0.8060 | 0.6606 | 0.8060 | 0.8977 |
| 0.4258 | 7.3125 | 702 | 0.8239 | 0.6606 | 0.8239 | 0.9077 |
| 0.4258 | 7.3333 | 704 | 0.7961 | 0.6606 | 0.7961 | 0.8923 |
| 0.4258 | 7.3542 | 706 | 0.7436 | 0.6606 | 0.7436 | 0.8623 |
| 0.4258 | 7.375 | 708 | 0.6975 | 0.7072 | 0.6975 | 0.8351 |
| 0.4258 | 7.3958 | 710 | 0.6810 | 0.7363 | 0.6810 | 0.8252 |
| 0.4258 | 7.4167 | 712 | 0.6808 | 0.7363 | 0.6808 | 0.8251 |
| 0.4258 | 7.4375 | 714 | 0.6950 | 0.7363 | 0.6950 | 0.8336 |
| 0.4258 | 7.4583 | 716 | 0.7304 | 0.7363 | 0.7304 | 0.8546 |
| 0.4258 | 7.4792 | 718 | 0.7531 | 0.7198 | 0.7531 | 0.8678 |
| 0.4258 | 7.5 | 720 | 0.7652 | 0.7198 | 0.7652 | 0.8747 |
| 0.4258 | 7.5208 | 722 | 0.7828 | 0.6909 | 0.7828 | 0.8847 |
| 0.4258 | 7.5417 | 724 | 0.7833 | 0.6909 | 0.7833 | 0.8851 |
| 0.4258 | 7.5625 | 726 | 0.8063 | 0.6603 | 0.8063 | 0.8979 |
| 0.4258 | 7.5833 | 728 | 0.8239 | 0.6603 | 0.8239 | 0.9077 |
| 0.4258 | 7.6042 | 730 | 0.8208 | 0.6603 | 0.8208 | 0.9060 |
| 0.4258 | 7.625 | 732 | 0.7921 | 0.6606 | 0.7921 | 0.8900 |
| 0.4258 | 7.6458 | 734 | 0.7691 | 0.6606 | 0.7691 | 0.8770 |
| 0.4258 | 7.6667 | 736 | 0.7463 | 0.6610 | 0.7463 | 0.8639 |
| 0.4258 | 7.6875 | 738 | 0.6998 | 0.6610 | 0.6998 | 0.8365 |
| 0.4258 | 7.7083 | 740 | 0.6650 | 0.6782 | 0.6650 | 0.8155 |
| 0.4258 | 7.7292 | 742 | 0.6542 | 0.6782 | 0.6542 | 0.8088 |
| 0.4258 | 7.75 | 744 | 0.6743 | 0.6782 | 0.6743 | 0.8212 |
| 0.4258 | 7.7708 | 746 | 0.7268 | 0.6610 | 0.7268 | 0.8525 |
| 0.4258 | 7.7917 | 748 | 0.7923 | 0.6452 | 0.7923 | 0.8901 |
| 0.4258 | 7.8125 | 750 | 0.8139 | 0.6458 | 0.8139 | 0.9022 |
| 0.4258 | 7.8333 | 752 | 0.7976 | 0.6603 | 0.7976 | 0.8931 |
| 0.4258 | 7.8542 | 754 | 0.7463 | 0.6606 | 0.7463 | 0.8639 |
| 0.4258 | 7.875 | 756 | 0.6859 | 0.6782 | 0.6859 | 0.8282 |
| 0.4258 | 7.8958 | 758 | 0.6570 | 0.7418 | 0.6570 | 0.8105 |
| 0.4258 | 7.9167 | 760 | 0.6559 | 0.7418 | 0.6559 | 0.8099 |
| 0.4258 | 7.9375 | 762 | 0.6716 | 0.7108 | 0.6716 | 0.8195 |
| 0.4258 | 7.9583 | 764 | 0.7082 | 0.6782 | 0.7082 | 0.8415 |
| 0.4258 | 7.9792 | 766 | 0.7689 | 0.6603 | 0.7689 | 0.8769 |
| 0.4258 | 8.0 | 768 | 0.8448 | 0.6458 | 0.8448 | 0.9191 |
| 0.4258 | 8.0208 | 770 | 0.9078 | 0.6458 | 0.9078 | 0.9528 |
| 0.4258 | 8.0417 | 772 | 0.9100 | 0.6458 | 0.9100 | 0.9539 |
| 0.4258 | 8.0625 | 774 | 0.8669 | 0.6458 | 0.8669 | 0.9311 |
| 0.4258 | 8.0833 | 776 | 0.8157 | 0.6458 | 0.8157 | 0.9032 |
| 0.4258 | 8.1042 | 778 | 0.7716 | 0.6603 | 0.7716 | 0.8784 |
| 0.4258 | 8.125 | 780 | 0.7585 | 0.6753 | 0.7585 | 0.8709 |
| 0.4258 | 8.1458 | 782 | 0.7385 | 0.6753 | 0.7385 | 0.8594 |
| 0.4258 | 8.1667 | 784 | 0.7352 | 0.6753 | 0.7352 | 0.8574 |
| 0.4258 | 8.1875 | 786 | 0.7384 | 0.6753 | 0.7384 | 0.8593 |
| 0.4258 | 8.2083 | 788 | 0.7703 | 0.6753 | 0.7703 | 0.8777 |
| 0.4258 | 8.2292 | 790 | 0.8074 | 0.6458 | 0.8074 | 0.8985 |
| 0.4258 | 8.25 | 792 | 0.8309 | 0.6458 | 0.8309 | 0.9116 |
| 0.4258 | 8.2708 | 794 | 0.8246 | 0.6458 | 0.8246 | 0.9081 |
| 0.4258 | 8.2917 | 796 | 0.8430 | 0.6458 | 0.8430 | 0.9182 |
| 0.4258 | 8.3125 | 798 | 0.8447 | 0.6458 | 0.8447 | 0.9191 |
| 0.4258 | 8.3333 | 800 | 0.8614 | 0.6458 | 0.8614 | 0.9281 |
| 0.4258 | 8.3542 | 802 | 0.8635 | 0.6458 | 0.8635 | 0.9292 |
| 0.4258 | 8.375 | 804 | 0.8471 | 0.6458 | 0.8471 | 0.9204 |
| 0.4258 | 8.3958 | 806 | 0.8217 | 0.6458 | 0.8217 | 0.9065 |
| 0.4258 | 8.4167 | 808 | 0.7883 | 0.6753 | 0.7883 | 0.8879 |
| 0.4258 | 8.4375 | 810 | 0.7547 | 0.6753 | 0.7547 | 0.8688 |
| 0.4258 | 8.4583 | 812 | 0.7345 | 0.6767 | 0.7345 | 0.8570 |
| 0.4258 | 8.4792 | 814 | 0.7206 | 0.6767 | 0.7206 | 0.8489 |
| 0.4258 | 8.5 | 816 | 0.7277 | 0.6753 | 0.7277 | 0.8531 |
| 0.4258 | 8.5208 | 818 | 0.7548 | 0.6753 | 0.7548 | 0.8688 |
| 0.4258 | 8.5417 | 820 | 0.7897 | 0.6753 | 0.7897 | 0.8887 |
| 0.4258 | 8.5625 | 822 | 0.8130 | 0.6753 | 0.8130 | 0.9017 |
| 0.4258 | 8.5833 | 824 | 0.8270 | 0.6603 | 0.8270 | 0.9094 |
| 0.4258 | 8.6042 | 826 | 0.8216 | 0.6603 | 0.8216 | 0.9064 |
| 0.4258 | 8.625 | 828 | 0.8166 | 0.6603 | 0.8166 | 0.9037 |
| 0.4258 | 8.6458 | 830 | 0.8201 | 0.6603 | 0.8201 | 0.9056 |
| 0.4258 | 8.6667 | 832 | 0.8188 | 0.6603 | 0.8188 | 0.9049 |
| 0.4258 | 8.6875 | 834 | 0.8375 | 0.6458 | 0.8375 | 0.9152 |
| 0.4258 | 8.7083 | 836 | 0.8483 | 0.6458 | 0.8483 | 0.9210 |
| 0.4258 | 8.7292 | 838 | 0.8309 | 0.6458 | 0.8309 | 0.9115 |
| 0.4258 | 8.75 | 840 | 0.8061 | 0.6458 | 0.8061 | 0.8978 |
| 0.4258 | 8.7708 | 842 | 0.7858 | 0.6458 | 0.7858 | 0.8864 |
| 0.4258 | 8.7917 | 844 | 0.7847 | 0.6458 | 0.7847 | 0.8858 |
| 0.4258 | 8.8125 | 846 | 0.7822 | 0.6458 | 0.7822 | 0.8844 |
| 0.4258 | 8.8333 | 848 | 0.7619 | 0.6452 | 0.7619 | 0.8729 |
| 0.4258 | 8.8542 | 850 | 0.7400 | 0.6782 | 0.7400 | 0.8602 |
| 0.4258 | 8.875 | 852 | 0.7075 | 0.6782 | 0.7075 | 0.8411 |
| 0.4258 | 8.8958 | 854 | 0.6863 | 0.6782 | 0.6863 | 0.8285 |
| 0.4258 | 8.9167 | 856 | 0.6823 | 0.6782 | 0.6823 | 0.8260 |
| 0.4258 | 8.9375 | 858 | 0.6910 | 0.6782 | 0.6910 | 0.8313 |
| 0.4258 | 8.9583 | 860 | 0.7012 | 0.6782 | 0.7012 | 0.8374 |
| 0.4258 | 8.9792 | 862 | 0.7076 | 0.6782 | 0.7076 | 0.8412 |
| 0.4258 | 9.0 | 864 | 0.7193 | 0.6782 | 0.7193 | 0.8481 |
| 0.4258 | 9.0208 | 866 | 0.7220 | 0.6782 | 0.7220 | 0.8497 |
| 0.4258 | 9.0417 | 868 | 0.7342 | 0.6606 | 0.7342 | 0.8568 |
| 0.4258 | 9.0625 | 870 | 0.7377 | 0.6606 | 0.7377 | 0.8589 |
| 0.4258 | 9.0833 | 872 | 0.7369 | 0.6606 | 0.7369 | 0.8584 |
| 0.4258 | 9.1042 | 874 | 0.7266 | 0.6782 | 0.7266 | 0.8524 |
| 0.4258 | 9.125 | 876 | 0.7111 | 0.6782 | 0.7111 | 0.8433 |
| 0.4258 | 9.1458 | 878 | 0.7010 | 0.6782 | 0.7010 | 0.8373 |
| 0.4258 | 9.1667 | 880 | 0.7031 | 0.6782 | 0.7031 | 0.8385 |
| 0.4258 | 9.1875 | 882 | 0.7120 | 0.6782 | 0.7120 | 0.8438 |
| 0.4258 | 9.2083 | 884 | 0.7205 | 0.6782 | 0.7205 | 0.8488 |
| 0.4258 | 9.2292 | 886 | 0.7277 | 0.6767 | 0.7277 | 0.8530 |
| 0.4258 | 9.25 | 888 | 0.7423 | 0.6753 | 0.7423 | 0.8615 |
| 0.4258 | 9.2708 | 890 | 0.7623 | 0.6603 | 0.7623 | 0.8731 |
| 0.4258 | 9.2917 | 892 | 0.7783 | 0.6603 | 0.7783 | 0.8822 |
| 0.4258 | 9.3125 | 894 | 0.7896 | 0.6603 | 0.7896 | 0.8886 |
| 0.4258 | 9.3333 | 896 | 0.7969 | 0.6603 | 0.7969 | 0.8927 |
| 0.4258 | 9.3542 | 898 | 0.7903 | 0.6603 | 0.7903 | 0.8890 |
| 0.4258 | 9.375 | 900 | 0.7789 | 0.6603 | 0.7789 | 0.8825 |
| 0.4258 | 9.3958 | 902 | 0.7627 | 0.6888 | 0.7627 | 0.8733 |
| 0.4258 | 9.4167 | 904 | 0.7530 | 0.7040 | 0.7530 | 0.8678 |
| 0.4258 | 9.4375 | 906 | 0.7387 | 0.7314 | 0.7387 | 0.8595 |
| 0.4258 | 9.4583 | 908 | 0.7255 | 0.7363 | 0.7255 | 0.8518 |
| 0.4258 | 9.4792 | 910 | 0.7145 | 0.7363 | 0.7145 | 0.8453 |
| 0.4258 | 9.5 | 912 | 0.7017 | 0.7418 | 0.7017 | 0.8377 |
| 0.4258 | 9.5208 | 914 | 0.6978 | 0.7418 | 0.6978 | 0.8353 |
| 0.4258 | 9.5417 | 916 | 0.6934 | 0.7418 | 0.6934 | 0.8327 |
| 0.4258 | 9.5625 | 918 | 0.6919 | 0.7418 | 0.6919 | 0.8318 |
| 0.4258 | 9.5833 | 920 | 0.6870 | 0.7418 | 0.6870 | 0.8288 |
| 0.4258 | 9.6042 | 922 | 0.6829 | 0.7418 | 0.6829 | 0.8264 |
| 0.4258 | 9.625 | 924 | 0.6822 | 0.7418 | 0.6822 | 0.8259 |
| 0.4258 | 9.6458 | 926 | 0.6808 | 0.7418 | 0.6808 | 0.8251 |
| 0.4258 | 9.6667 | 928 | 0.6808 | 0.7418 | 0.6808 | 0.8251 |
| 0.4258 | 9.6875 | 930 | 0.6806 | 0.7418 | 0.6806 | 0.8250 |
| 0.4258 | 9.7083 | 932 | 0.6830 | 0.7418 | 0.6830 | 0.8264 |
| 0.4258 | 9.7292 | 934 | 0.6852 | 0.7418 | 0.6852 | 0.8278 |
| 0.4258 | 9.75 | 936 | 0.6894 | 0.7418 | 0.6894 | 0.8303 |
| 0.4258 | 9.7708 | 938 | 0.6944 | 0.7418 | 0.6944 | 0.8333 |
| 0.4258 | 9.7917 | 940 | 0.7006 | 0.7418 | 0.7006 | 0.8370 |
| 0.4258 | 9.8125 | 942 | 0.7064 | 0.7418 | 0.7064 | 0.8405 |
| 0.4258 | 9.8333 | 944 | 0.7108 | 0.7418 | 0.7108 | 0.8431 |
| 0.4258 | 9.8542 | 946 | 0.7134 | 0.7418 | 0.7134 | 0.8446 |
| 0.4258 | 9.875 | 948 | 0.7162 | 0.7418 | 0.7162 | 0.8463 |
| 0.4258 | 9.8958 | 950 | 0.7169 | 0.7418 | 0.7169 | 0.8467 |
| 0.4258 | 9.9167 | 952 | 0.7180 | 0.7418 | 0.7180 | 0.8473 |
| 0.4258 | 9.9375 | 954 | 0.7188 | 0.7418 | 0.7188 | 0.8478 |
| 0.4258 | 9.9583 | 956 | 0.7192 | 0.7418 | 0.7192 | 0.8481 |
| 0.4258 | 9.9792 | 958 | 0.7191 | 0.7418 | 0.7191 | 0.8480 |
| 0.4258 | 10.0 | 960 | 0.7190 | 0.7418 | 0.7190 | 0.8479 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
camidenecken/RoBERTa-RM1-v2-4-rm-v27 | camidenecken | 2024-11-25T16:07:46Z | 189 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T16:07:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
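A minimal sketch, assuming the checkpoint is a standard RoBERTa sequence-classification model as the repository tags (`roberta`, `text-classification`) suggest; the label mapping is not documented in this card:

```python
# Hedged sketch: assumes a standard RoBERTa sequence-classification head, as the
# repository tags suggest; the meaning of the output scores is not documented here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "camidenecken/RoBERTa-RM1-v2-4-rm-v27"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example input text to score.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # interpretation depends on the (undocumented) label mapping
```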
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leguigou/lisanna-kruus-pro | leguigou | 2024-11-25T16:04:31Z | 118 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-25T16:04:23Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/lisanna-kruus-pro_003000_00_20241125163536.png
text: photo portrait of a woman in light, dark background, makeup, smiling
- output:
url: sample/lisanna-kruus-pro_003000_01_20241125163635.png
text: photo of woman in red dress, cleavage
- output:
url: sample/lisanna-kruus-pro_003000_02_20241125163734.png
text: photo of woman standing outdoor in forest
- output:
url: sample/lisanna-kruus-pro_003000_03_20241125163833.png
text: photo of woman standing and smiling in bikini on beach
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Lisanna Kruus Pro
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
No trigger words defined.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
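For Diffusers users, a minimal sketch is shown below; it assumes the repository contains a standard Diffusers-compatible Flux LoRA and that you have access to the gated FLUX.1-dev base model:

```python
# Hedged sketch: assumes a Diffusers-compatible Flux LoRA; exact filenames inside the
# repository and VRAM requirements are not documented in this card.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("leguigou/lisanna-kruus-pro")
pipe.to("cuda")

image = pipe(
    "photo portrait of a woman in light, dark background, makeup, smiling",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("sample.png")
```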
|
Sugyeong/mistral_moe_original_8 | Sugyeong | 2024-11-25T16:01:57Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T15:55:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
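A minimal sketch, assuming the custom MoE code in this repository follows the usual causal-LM interface; `trust_remote_code=True` is assumed to be required because of the `custom_code` tag, so review the remote code before enabling it:

```python
# Hedged sketch: assumes the repository's custom modeling code exposes a standard
# causal-LM interface; hardware requirements are not documented in this card.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Sugyeong/mistral_moe_original_8"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Briefly explain what a mixture-of-experts language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Since the repository is also tagged `conversational`, the tokenizer may ship a chat template (`tokenizer.apply_chat_template`), which would be the preferred entry point for multi-turn use.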
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jimmy1229/opt-125m-gptq-4bit | Jimmy1229 | 2024-11-25T16:00:29Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-11-25T16:00:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
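A minimal sketch, assuming this 4-bit GPTQ checkpoint of OPT-125M loads through the standard Transformers GPTQ integration (which requires a CUDA GPU plus the `optimum` and `auto-gptq`/`gptqmodel` backends):

```python
# Hedged sketch: assumes the GPTQ quantization config stored in the repository is
# picked up automatically by from_pretrained when a GPTQ backend is installed.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Jimmy1229/opt-125m-gptq-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```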
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mohit1Kulkarni/Florence-2-FT-CarDamage | Mohit1Kulkarni | 2024-11-25T16:00:17Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-11-25T15:42:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
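A minimal sketch, assuming this fine-tune keeps the standard Florence-2 interface (processor plus causal LM loaded with remote code); the task prompt below is an assumption, not something documented in this card:

```python
# Hedged sketch: follows the common Florence-2 usage pattern. The task prompt for this
# car-damage fine-tune is not documented, so "<OD>" (object detection) is only a guess,
# and "damaged_car.jpg" is a hypothetical local image path.
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

model_id = "Mohit1Kulkarni/Florence-2-FT-CarDamage"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("damaged_car.jpg")
task = "<OD>"
inputs = processor(text=task, images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=256,
)
decoded = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
print(processor.post_process_generation(decoded, task=task, image_size=image.size))
```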
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
camidenecken/RoBERTa-RM1-v2-4-rm-v23 | camidenecken | 2024-11-25T15:58:40Z | 186 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T15:58:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
K-Monitor/kmdb_classification_category_v3 | K-Monitor | 2024-11-25T15:58:39Z | 185 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T15:58:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
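A minimal sketch, assuming a standard BERT sequence-classification checkpoint as the tags suggest; the label set and expected input language are not documented in this card:

```python
# Hedged sketch: the text-classification pipeline resolves the label names from the
# checkpoint's config; what those categories mean is not described in this card.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="K-Monitor/kmdb_classification_category_v3",
)
print(classifier("Example article text to categorise."))
```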
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
maxg73872/bert-large-uncased-finetuned-medmcqa-2024-11-25-T15-46-18 | maxg73872 | 2024-11-25T15:58:11Z | 158 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-11-25T15:57:17Z | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-large-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-large-uncased-finetuned-medmcqa-2024-11-25-T15-46-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-medmcqa-2024-11-25-T15-46-18
This model is a fine-tuned version of [google-bert/bert-large-uncased](https://huggingface.co/google-bert/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3951
- Accuracy: 0.4524
- F1: 0.4646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
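A rough, hedged equivalent of the settings above expressed as `TrainingArguments`; dataset loading, the multiple-choice preprocessing, and the `Trainer` call itself are omitted, and the output directory is hypothetical:

```python
# Hedged sketch: mirrors the listed hyperparameters only; not the authors' actual script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-large-uncased-finetuned-medmcqa",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,   # effective train batch size of 32
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```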
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 1.3793 | 0.9978 | 57 | 1.3951 | 0.4524 | 0.4646 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
ElMad/melodic-bee-938 | ElMad | 2024-11-25T15:56:18Z | 148 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T15:55:33Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-base
tags:
- generated_from_trainer
model-index:
- name: melodic-bee-938
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# melodic-bee-938
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3387
- Hamming Loss: 0.1123
- Zero One Loss: 1.0
- Jaccard Score: 1.0
- Hamming Loss Optimised: 0.1123
- Hamming Loss Threshold: 0.9000
- Zero One Loss Optimised: 1.0
- Zero One Loss Threshold: 0.9000
- Jaccard Score Optimised: 1.0
- Jaccard Score Threshold: 0.9000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0011128424281972827
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Hamming Loss | Zero One Loss | Jaccard Score | Hamming Loss Optimised | Hamming Loss Threshold | Zero One Loss Optimised | Zero One Loss Threshold | Jaccard Score Optimised | Jaccard Score Threshold |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:-------------:|:-------------:|:----------------------:|:----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|
| 0.3511 | 1.0 | 100 | 0.3435 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.3408 | 2.0 | 200 | 0.3418 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.3393 | 3.0 | 300 | 0.3440 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.3377 | 4.0 | 400 | 0.3395 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.3363 | 5.0 | 500 | 0.3408 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.3362 | 6.0 | 600 | 0.3397 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.3349 | 7.0 | 700 | 0.3420 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.334 | 8.0 | 800 | 0.3397 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.3337 | 9.0 | 900 | 0.3387 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
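
Since the metrics above are reported at a 0.9000 decision threshold, a minimal multi-label inference sketch might look as follows (the label set is not documented here, so predictions are shown as raw label indices, and the input sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ElMad/melodic-bee-938"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Example input sentence."  # illustrative input
with torch.no_grad():
    logits = model(**tokenizer(text, return_tensors="pt")).logits

probs = torch.sigmoid(logits).squeeze(0)
predicted = (probs >= 0.9).nonzero(as_tuple=True)[0].tolist()  # 0.9000 threshold from above
print(predicted, probs.tolist())
```
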
|
camidenecken/RoBERTa-RM1-v2-4-rm-v21 | camidenecken | 2024-11-25T15:54:16Z | 139 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T15:53:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
QuantFactory/Qwen2.5-7B-HomerCreative-Mix-GGUF | QuantFactory | 2024-11-25T15:50:11Z | 106 | 3 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"creative",
"roleplay",
"instruct",
"qwen",
"model_stock",
"bfloat16",
"en",
"arxiv:2403.19522",
"base_model:allknowingroger/HomerSlerp1-7B",
"base_model:merge:allknowingroger/HomerSlerp1-7B",
"base_model:bunnycore/Qandora-2.5-7B-Creative",
"base_model:merge:bunnycore/Qandora-2.5-7B-Creative",
"base_model:bunnycore/Qwen2.5-7B-Instruct-Fusion",
"base_model:merge:bunnycore/Qwen2.5-7B-Instruct-Fusion",
"base_model:newsbang/Homer-v0.5-Qwen2.5-7B",
"base_model:merge:newsbang/Homer-v0.5-Qwen2.5-7B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-25T13:35:47Z |
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- merge
- mergekit
- lazymergekit
- creative
- roleplay
- instruct
- qwen
- model_stock
- bfloat16
base_model:
- newsbang/Homer-v0.5-Qwen2.5-7B
- allknowingroger/HomerSlerp1-7B
- bunnycore/Qwen2.5-7B-Instruct-Fusion
- bunnycore/Qandora-2.5-7B-Creative
model-index:
- name: Qwen2.5-7B-HomerCreative-Mix
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 78.35
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 36.77
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 32.33
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.6
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.77
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 38.3
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
name: Open LLM Leaderboard
---
[](https://hf.co/QuantFactory)
# QuantFactory/Qwen2.5-7B-HomerCreative-Mix-GGUF
This is a quantized version of [ZeroXClem/Qwen2.5-7B-HomerCreative-Mix](https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerCreative-Mix) created using llama.cpp.
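
A minimal usage sketch with the `llama-cpp-python` bindings follows (assumptions: `pip install llama-cpp-python`, one of the GGUF files from this repo downloaded locally, and an illustrative filename below):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-7B-HomerCreative-Mix.Q4_K_M.gguf",  # assumed local quant file
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write one opening line for a fantasy story."}],
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```
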
# Original Model Card
# ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
**ZeroXClem/Qwen2.5-7B-HomerCreative-Mix** is an advanced language model meticulously crafted by merging four pre-trained models using the powerful [mergekit](https://github.com/cg123/mergekit) framework. This fusion leverages the **Model Stock** merge method to combine the creative prowess of **Qandora**, the instructive capabilities of **Qwen-Instruct-Fusion**, the sophisticated blending of **HomerSlerp1**, and the foundational conversational strengths of **Homer-v0.5-Qwen2.5-7B**. The resulting model excels in creative text generation, contextual understanding, and dynamic conversational interactions.
## Merged Models
This model merge incorporates the following:
- [**bunnycore/Qandora-2.5-7B-Creative**](https://huggingface.co/bunnycore/Qandora-2.5-7B-Creative): Specializes in creative text generation, enhancing the model's ability to produce imaginative and diverse content.
- [**bunnycore/Qwen2.5-7B-Instruct-Fusion**](https://huggingface.co/bunnycore/Qwen2.5-7B-Instruct-Fusion): Focuses on instruction-following capabilities, improving the model's performance in understanding and executing user commands.
- [**allknowingroger/HomerSlerp1-7B**](https://huggingface.co/allknowingroger/HomerSlerp1-7B): Utilizes spherical linear interpolation (SLERP) to blend model weights smoothly, ensuring a harmonious integration of different model attributes.
- [**newsbang/Homer-v0.5-Qwen2.5-7B**](https://huggingface.co/newsbang/Homer-v0.5-Qwen2.5-7B): Acts as the foundational conversational model, providing robust language comprehension and generation capabilities.
## Merge Configuration
The configuration below outlines how the models are merged using the **Model Stock** method. This approach ensures a balanced and effective integration of the unique strengths from each source model.
```yaml
# Merge configuration for ZeroXClem/Qwen2.5-7B-HomerCreative-Mix using Model Stock
models:
- model: bunnycore/Qandora-2.5-7B-Creative
- model: bunnycore/Qwen2.5-7B-Instruct-Fusion
- model: allknowingroger/HomerSlerp1-7B
merge_method: model_stock
base_model: newsbang/Homer-v0.5-Qwen2.5-7B
normalize: false
int8_mask: true
dtype: bfloat16
```
### Key Parameters
- **Merge Method (`merge_method`):** Utilizes the **Model Stock** method, as described in [Model Stock](https://arxiv.org/abs/2403.19522), to effectively combine multiple models by leveraging their strengths.
- **Models (`models`):** Specifies the list of models to be merged:
- **bunnycore/Qandora-2.5-7B-Creative:** Enhances creative text generation.
- **bunnycore/Qwen2.5-7B-Instruct-Fusion:** Improves instruction-following capabilities.
- **allknowingroger/HomerSlerp1-7B:** Facilitates smooth blending of model weights using SLERP.
- **Base Model (`base_model`):** Defines the foundational model for the merge, which is **newsbang/Homer-v0.5-Qwen2.5-7B** in this case.
- **Normalization (`normalize`):** Set to `false` to retain the original scaling of the model weights during the merge.
- **INT8 Mask (`int8_mask`):** Enabled (`true`) to apply INT8 quantization masking, optimizing the model for efficient inference without significant loss in precision.
- **Data Type (`dtype`):** Uses `bfloat16` to maintain computational efficiency while ensuring high precision.
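
To reproduce a merge like this locally, a minimal sketch is shown below (assuming `mergekit` is installed via `pip install mergekit` and the YAML above is saved as `config.yaml`; the output directory name is illustrative):

```python
import subprocess

# mergekit-yaml downloads the listed checkpoints and writes the merged model to the output dir
subprocess.run(
    ["mergekit-yaml", "config.yaml", "./Qwen2.5-7B-HomerCreative-Mix"],
    check=True,
)
```
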
## Performance Highlights
- **Creative Text Generation:** Enhanced ability to produce imaginative and diverse content suitable for creative writing, storytelling, and content creation.
- **Instruction Following:** Improved performance in understanding and executing user instructions, making the model more responsive and accurate in task execution.
- **Optimized Inference:** INT8 masking and `bfloat16` data type contribute to efficient computation, enabling faster response times without compromising quality.
## Use Case & Applications
**ZeroXClem/Qwen2.5-7B-HomerCreative-Mix** is designed to excel in environments that demand both creative generation and precise instruction following. Ideal applications include:
- **Creative Writing Assistance:** Aiding authors and content creators in generating imaginative narratives, dialogues, and descriptive text.
- **Interactive Storytelling and Role-Playing:** Enhancing dynamic and engaging interactions in role-playing games and interactive storytelling platforms.
- **Educational Tools and Tutoring Systems:** Providing detailed explanations, answering questions, and assisting in educational content creation with contextual understanding.
- **Technical Support and Customer Service:** Offering accurate and contextually relevant responses in technical support scenarios, improving user satisfaction.
- **Content Generation for Marketing:** Creating compelling and diverse marketing copy, social media posts, and promotional material with creative flair.
## Usage
To utilize **ZeroXClem/Qwen2.5-7B-HomerCreative-Mix**, follow the steps below:
### Installation
First, install the necessary libraries:
```bash
pip install -qU transformers accelerate
```
### Example Code
Below is an example of how to load and use the model for text generation:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch
# Define the model name
model_name = "ZeroXClem/Qwen2.5-7B-HomerCreative-Mix"
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load the model
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto"
)
# Initialize the pipeline
text_generator = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
device_map="auto"
)
# Define the input prompt
prompt = "Once upon a time in a land far, far away,"
# Generate the output
outputs = text_generator(
prompt,
max_new_tokens=150,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95
)
# Print the generated text
print(outputs[0]["generated_text"])
```
### Notes
- **Fine-Tuning:** This merged model may require fine-tuning to optimize performance for specific applications or domains.
- **Resource Requirements:** Ensure that your environment has sufficient computational resources, especially GPU-enabled hardware, to handle the model efficiently during inference.
- **Customization:** Users can adjust parameters such as `temperature`, `top_k`, and `top_p` to control the creativity and diversity of the generated text.
## License
This model is open-sourced under the **Apache-2.0 License**.
## Tags
- `merge`
- `mergekit`
- `model_stock`
- `Qwen`
- `Homer`
- `Creative`
- `ZeroXClem/Qwen2.5-7B-HomerCreative-Mix`
- `bunnycore/Qandora-2.5-7B-Creative`
- `bunnycore/Qwen2.5-7B-Instruct-Fusion`
- `allknowingroger/HomerSlerp1-7B`
- `newsbang/Homer-v0.5-Qwen2.5-7B`
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ZeroXClem__Qwen2.5-7B-HomerCreative-Mix)
| Metric |Value|
|-------------------|----:|
|Avg. |34.35|
|IFEval (0-Shot) |78.35|
|BBH (3-Shot) |36.77|
|MATH Lvl 5 (4-Shot)|32.33|
|GPQA (0-shot) | 6.60|
|MuSR (0-shot) |13.77|
|MMLU-PRO (5-shot) |38.30|
|
ElMad/bemused-trout-607 | ElMad | 2024-11-25T15:50:10Z | 127 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T15:49:04Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: bemused-trout-607
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bemused-trout-607
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1783
- Hamming Loss: 0.0643
- Zero One Loss: 0.4113
- Jaccard Score: 0.3643
- Hamming Loss Optimised: 0.0615
- Hamming Loss Threshold: 0.7239
- Zero One Loss Optimised: 0.4038
- Zero One Loss Threshold: 0.4731
- Jaccard Score Optimised: 0.3281
- Jaccard Score Threshold: 0.2446
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.0943791435964314e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Hamming Loss | Zero One Loss | Jaccard Score | Hamming Loss Optimised | Hamming Loss Threshold | Zero One Loss Optimised | Zero One Loss Threshold | Jaccard Score Optimised | Jaccard Score Threshold |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:-------------:|:-------------:|:----------------------:|:----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|
| 0.2941 | 1.0 | 400 | 0.2355 | 0.0934 | 0.7987 | 0.7963 | 0.0929 | 0.6046 | 0.6738 | 0.2934 | 0.5524 | 0.2658 |
| 0.2247 | 2.0 | 800 | 0.2132 | 0.0914 | 0.6188 | 0.5905 | 0.0906 | 0.6229 | 0.6262 | 0.3893 | 0.4890 | 0.2889 |
| 0.187 | 3.0 | 1200 | 0.1854 | 0.066 | 0.4712 | 0.4224 | 0.0653 | 0.7034 | 0.4325 | 0.4451 | 0.3701 | 0.4026 |
| 0.1495 | 4.0 | 1600 | 0.1783 | 0.0643 | 0.4113 | 0.3643 | 0.0615 | 0.7239 | 0.4038 | 0.4731 | 0.3281 | 0.2446 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
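
The "Optimised"/"Threshold" columns above suggest a per-metric threshold sweep over validation predictions. Below is an illustrative sketch of such a sweep with placeholder arrays; it is not the tuning code actually used for this model:

```python
import numpy as np
from sklearn.metrics import hamming_loss, jaccard_score

rng = np.random.default_rng(0)
probs = rng.random((200, 8))                        # placeholder sigmoid outputs
labels = (rng.random((200, 8)) > 0.9).astype(int)   # placeholder multi-label targets

def sweep(metric, maximize):
    """Return the (threshold, score) pair that optimises `metric` over a simple grid."""
    scored = [(t, metric(labels, (probs >= t).astype(int)))
              for t in np.linspace(0.05, 0.95, 19)]
    return (max if maximize else min)(scored, key=lambda s: s[1])

print("hamming :", sweep(hamming_loss, maximize=False))
print("jaccard :", sweep(lambda y, p: jaccard_score(y, p, average="samples", zero_division=0),
                         maximize=True))
```
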
|
camidenecken/RoBERTa-RM1-v2-4-rm-v19 | camidenecken | 2024-11-25T15:49:38Z | 145 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T15:49:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k15_task5_organization_fold1 | MayBashendy | 2024-11-25T15:49:22Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T15:41:54Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k15_task5_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k15_task5_organization_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9927
- Qwk: 0.5878
- Mse: 0.9927
- Rmse: 0.9963
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0270 | 2 | 3.3470 | -0.1953 | 3.3470 | 1.8295 |
| No log | 0.0541 | 4 | 2.7794 | -0.3953 | 2.7794 | 1.6672 |
| No log | 0.0811 | 6 | 2.7418 | -0.2687 | 2.7418 | 1.6558 |
| No log | 0.1081 | 8 | 2.5369 | -0.2788 | 2.5369 | 1.5928 |
| No log | 0.1351 | 10 | 2.2401 | -0.2034 | 2.2401 | 1.4967 |
| No log | 0.1622 | 12 | 1.9704 | -0.3456 | 1.9704 | 1.4037 |
| No log | 0.1892 | 14 | 1.5833 | -0.2721 | 1.5833 | 1.2583 |
| No log | 0.2162 | 16 | 1.3369 | 0.0 | 1.3369 | 1.1562 |
| No log | 0.2432 | 18 | 1.2930 | 0.0 | 1.2930 | 1.1371 |
| No log | 0.2703 | 20 | 1.3110 | -0.0728 | 1.3110 | 1.1450 |
| No log | 0.2973 | 22 | 1.2759 | -0.0728 | 1.2759 | 1.1295 |
| No log | 0.3243 | 24 | 1.2541 | 0.0809 | 1.2541 | 1.1199 |
| No log | 0.3514 | 26 | 1.2153 | 0.0809 | 1.2153 | 1.1024 |
| No log | 0.3784 | 28 | 1.1684 | 0.2177 | 1.1684 | 1.0809 |
| No log | 0.4054 | 30 | 0.9885 | 0.2279 | 0.9885 | 0.9942 |
| No log | 0.4324 | 32 | 0.9191 | 0.1571 | 0.9191 | 0.9587 |
| No log | 0.4595 | 34 | 1.0098 | 0.2780 | 1.0098 | 1.0049 |
| No log | 0.4865 | 36 | 1.1475 | 0.1336 | 1.1475 | 1.0712 |
| No log | 0.5135 | 38 | 1.1826 | 0.1336 | 1.1826 | 1.0875 |
| No log | 0.5405 | 40 | 1.1218 | 0.375 | 1.1218 | 1.0591 |
| No log | 0.5676 | 42 | 1.0676 | 0.3610 | 1.0676 | 1.0332 |
| No log | 0.5946 | 44 | 0.9588 | 0.3235 | 0.9588 | 0.9792 |
| No log | 0.6216 | 46 | 0.9671 | 0.2368 | 0.9671 | 0.9834 |
| No log | 0.6486 | 48 | 1.0735 | 0.3803 | 1.0735 | 1.0361 |
| No log | 0.6757 | 50 | 1.1509 | 0.2330 | 1.1509 | 1.0728 |
| No log | 0.7027 | 52 | 1.1989 | 0.2978 | 1.1989 | 1.0950 |
| No log | 0.7297 | 54 | 1.0229 | 0.4203 | 1.0229 | 1.0114 |
| No log | 0.7568 | 56 | 0.9225 | 0.3949 | 0.9225 | 0.9604 |
| No log | 0.7838 | 58 | 0.8508 | 0.4684 | 0.8508 | 0.9224 |
| No log | 0.8108 | 60 | 0.8312 | 0.5161 | 0.8312 | 0.9117 |
| No log | 0.8378 | 62 | 0.9489 | 0.4780 | 0.9489 | 0.9741 |
| No log | 0.8649 | 64 | 1.0816 | 0.3598 | 1.0816 | 1.0400 |
| No log | 0.8919 | 66 | 1.1509 | 0.2898 | 1.1509 | 1.0728 |
| No log | 0.9189 | 68 | 1.0430 | 0.3003 | 1.0430 | 1.0213 |
| No log | 0.9459 | 70 | 0.8708 | 0.2933 | 0.8708 | 0.9332 |
| No log | 0.9730 | 72 | 0.8250 | 0.2933 | 0.8250 | 0.9083 |
| No log | 1.0 | 74 | 0.8420 | 0.2279 | 0.8420 | 0.9176 |
| No log | 1.0270 | 76 | 0.8842 | 0.2279 | 0.8842 | 0.9403 |
| No log | 1.0541 | 78 | 0.9016 | 0.2279 | 0.9016 | 0.9495 |
| No log | 1.0811 | 80 | 0.9029 | 0.2279 | 0.9029 | 0.9502 |
| No log | 1.1081 | 82 | 0.8987 | 0.2279 | 0.8987 | 0.9480 |
| No log | 1.1351 | 84 | 0.8682 | 0.2279 | 0.8682 | 0.9318 |
| No log | 1.1622 | 86 | 0.8270 | 0.2279 | 0.8270 | 0.9094 |
| No log | 1.1892 | 88 | 0.8041 | 0.2279 | 0.8041 | 0.8967 |
| No log | 1.2162 | 90 | 0.8034 | 0.3594 | 0.8034 | 0.8963 |
| No log | 1.2432 | 92 | 0.7690 | 0.3594 | 0.7690 | 0.8769 |
| No log | 1.2703 | 94 | 0.7330 | 0.4138 | 0.7330 | 0.8561 |
| No log | 1.2973 | 96 | 0.7400 | 0.6215 | 0.7400 | 0.8602 |
| No log | 1.3243 | 98 | 0.7590 | 0.5821 | 0.7590 | 0.8712 |
| No log | 1.3514 | 100 | 0.8597 | 0.5 | 0.8597 | 0.9272 |
| No log | 1.3784 | 102 | 0.9010 | 0.5 | 0.9010 | 0.9492 |
| No log | 1.4054 | 104 | 0.8493 | 0.5058 | 0.8493 | 0.9216 |
| No log | 1.4324 | 106 | 0.7849 | 0.4753 | 0.7849 | 0.8859 |
| No log | 1.4595 | 108 | 0.7557 | 0.5821 | 0.7557 | 0.8693 |
| No log | 1.4865 | 110 | 0.7907 | 0.6034 | 0.7907 | 0.8892 |
| No log | 1.5135 | 112 | 0.9324 | 0.5 | 0.9324 | 0.9656 |
| No log | 1.5405 | 114 | 0.9794 | 0.4070 | 0.9794 | 0.9896 |
| No log | 1.5676 | 116 | 0.8694 | 0.4901 | 0.8694 | 0.9324 |
| No log | 1.5946 | 118 | 0.6936 | 0.6012 | 0.6936 | 0.8328 |
| No log | 1.6216 | 120 | 0.6387 | 0.6012 | 0.6387 | 0.7992 |
| No log | 1.6486 | 122 | 0.6342 | 0.6012 | 0.6342 | 0.7964 |
| No log | 1.6757 | 124 | 0.6751 | 0.6012 | 0.6751 | 0.8216 |
| No log | 1.7027 | 126 | 0.8003 | 0.5058 | 0.8003 | 0.8946 |
| No log | 1.7297 | 128 | 0.9931 | 0.4334 | 0.9931 | 0.9965 |
| No log | 1.7568 | 130 | 1.0242 | 0.4334 | 1.0242 | 1.0120 |
| No log | 1.7838 | 132 | 1.0280 | 0.4199 | 1.0280 | 1.0139 |
| No log | 1.8108 | 134 | 0.9160 | 0.6163 | 0.9160 | 0.9571 |
| No log | 1.8378 | 136 | 0.8837 | 0.5704 | 0.8837 | 0.9401 |
| No log | 1.8649 | 138 | 0.8559 | 0.6509 | 0.8559 | 0.9252 |
| No log | 1.8919 | 140 | 0.9217 | 0.5325 | 0.9217 | 0.9601 |
| No log | 1.9189 | 142 | 0.9157 | 0.5300 | 0.9157 | 0.9569 |
| No log | 1.9459 | 144 | 0.7345 | 0.6163 | 0.7345 | 0.8570 |
| No log | 1.9730 | 146 | 0.6788 | 0.6034 | 0.6788 | 0.8239 |
| No log | 2.0 | 148 | 0.7720 | 0.6437 | 0.7720 | 0.8786 |
| No log | 2.0270 | 150 | 0.8465 | 0.5662 | 0.8465 | 0.9200 |
| No log | 2.0541 | 152 | 0.9842 | 0.5289 | 0.9842 | 0.9921 |
| No log | 2.0811 | 154 | 1.0334 | 0.5766 | 1.0334 | 1.0165 |
| No log | 2.1081 | 156 | 0.8911 | 0.5599 | 0.8911 | 0.9440 |
| No log | 2.1351 | 158 | 0.7304 | 0.6601 | 0.7304 | 0.8546 |
| No log | 2.1622 | 160 | 0.6709 | 0.5640 | 0.6709 | 0.8191 |
| No log | 2.1892 | 162 | 0.6482 | 0.5640 | 0.6482 | 0.8051 |
| No log | 2.2162 | 164 | 0.6600 | 0.5640 | 0.6600 | 0.8124 |
| No log | 2.2432 | 166 | 0.7428 | 0.6073 | 0.7428 | 0.8619 |
| No log | 2.2703 | 168 | 0.8597 | 0.5930 | 0.8597 | 0.9272 |
| No log | 2.2973 | 170 | 1.0576 | 0.5810 | 1.0576 | 1.0284 |
| No log | 2.3243 | 172 | 1.2791 | 0.5752 | 1.2791 | 1.1310 |
| No log | 2.3514 | 174 | 1.4347 | 0.4746 | 1.4347 | 1.1978 |
| No log | 2.3784 | 176 | 1.3490 | 0.5212 | 1.3490 | 1.1615 |
| No log | 2.4054 | 178 | 1.1040 | 0.5810 | 1.1040 | 1.0507 |
| No log | 2.4324 | 180 | 0.9291 | 0.5930 | 0.9291 | 0.9639 |
| No log | 2.4595 | 182 | 0.7842 | 0.6107 | 0.7842 | 0.8856 |
| No log | 2.4865 | 184 | 0.7277 | 0.6107 | 0.7277 | 0.8531 |
| No log | 2.5135 | 186 | 0.7255 | 0.5476 | 0.7255 | 0.8518 |
| No log | 2.5405 | 188 | 0.8013 | 0.5415 | 0.8013 | 0.8952 |
| No log | 2.5676 | 190 | 0.9826 | 0.5810 | 0.9826 | 0.9913 |
| No log | 2.5946 | 192 | 1.1326 | 0.5644 | 1.1326 | 1.0642 |
| No log | 2.6216 | 194 | 1.0953 | 0.5168 | 1.0953 | 1.0466 |
| No log | 2.6486 | 196 | 0.9240 | 0.5526 | 0.9240 | 0.9612 |
| No log | 2.6757 | 198 | 0.8639 | 0.5526 | 0.8639 | 0.9295 |
| No log | 2.7027 | 200 | 0.8575 | 0.5526 | 0.8575 | 0.9260 |
| No log | 2.7297 | 202 | 0.8920 | 0.5373 | 0.8920 | 0.9444 |
| No log | 2.7568 | 204 | 0.8756 | 0.4976 | 0.8756 | 0.9357 |
| No log | 2.7838 | 206 | 0.7547 | 0.5856 | 0.7547 | 0.8687 |
| No log | 2.8108 | 208 | 0.6556 | 0.6221 | 0.6556 | 0.8097 |
| No log | 2.8378 | 210 | 0.6032 | 0.6221 | 0.6032 | 0.7766 |
| No log | 2.8649 | 212 | 0.5555 | 0.6221 | 0.5555 | 0.7453 |
| No log | 2.8919 | 214 | 0.5262 | 0.6639 | 0.5262 | 0.7254 |
| No log | 2.9189 | 216 | 0.5161 | 0.6639 | 0.5161 | 0.7184 |
| No log | 2.9459 | 218 | 0.5585 | 0.6247 | 0.5585 | 0.7473 |
| No log | 2.9730 | 220 | 0.7422 | 0.6988 | 0.7422 | 0.8615 |
| No log | 3.0 | 222 | 1.0061 | 0.6694 | 1.0061 | 1.0031 |
| No log | 3.0270 | 224 | 1.1632 | 0.6226 | 1.1632 | 1.0785 |
| No log | 3.0541 | 226 | 1.0546 | 0.6318 | 1.0546 | 1.0269 |
| No log | 3.0811 | 228 | 0.8119 | 0.6162 | 0.8119 | 0.9011 |
| No log | 3.1081 | 230 | 0.6537 | 0.6783 | 0.6537 | 0.8085 |
| No log | 3.1351 | 232 | 0.6191 | 0.6615 | 0.6191 | 0.7869 |
| No log | 3.1622 | 234 | 0.5818 | 0.7150 | 0.5818 | 0.7628 |
| No log | 3.1892 | 236 | 0.5558 | 0.7122 | 0.5558 | 0.7455 |
| No log | 3.2162 | 238 | 0.5533 | 0.7291 | 0.5533 | 0.7438 |
| No log | 3.2432 | 240 | 0.6209 | 0.6753 | 0.6209 | 0.7880 |
| No log | 3.2703 | 242 | 0.8375 | 0.5446 | 0.8375 | 0.9151 |
| No log | 3.2973 | 244 | 1.0373 | 0.6239 | 1.0373 | 1.0185 |
| No log | 3.3243 | 246 | 1.0602 | 0.6254 | 1.0602 | 1.0297 |
| No log | 3.3514 | 248 | 0.9671 | 0.5446 | 0.9671 | 0.9834 |
| No log | 3.3784 | 250 | 0.8664 | 0.5446 | 0.8664 | 0.9308 |
| No log | 3.4054 | 252 | 0.8145 | 0.5446 | 0.8145 | 0.9025 |
| No log | 3.4324 | 254 | 0.8288 | 0.5446 | 0.8288 | 0.9104 |
| No log | 3.4595 | 256 | 0.9045 | 0.5363 | 0.9045 | 0.9510 |
| No log | 3.4865 | 258 | 0.8845 | 0.5363 | 0.8845 | 0.9405 |
| No log | 3.5135 | 260 | 0.8898 | 0.5363 | 0.8898 | 0.9433 |
| No log | 3.5405 | 262 | 0.8961 | 0.5363 | 0.8961 | 0.9466 |
| No log | 3.5676 | 264 | 0.9008 | 0.5363 | 0.9008 | 0.9491 |
| No log | 3.5946 | 266 | 0.9427 | 0.5363 | 0.9427 | 0.9709 |
| No log | 3.6216 | 268 | 0.9280 | 0.5363 | 0.9280 | 0.9633 |
| No log | 3.6486 | 270 | 0.9286 | 0.5810 | 0.9286 | 0.9636 |
| No log | 3.6757 | 272 | 0.8061 | 0.5930 | 0.8061 | 0.8978 |
| No log | 3.7027 | 274 | 0.7611 | 0.5463 | 0.7611 | 0.8724 |
| No log | 3.7297 | 276 | 0.7122 | 0.5872 | 0.7122 | 0.8439 |
| No log | 3.7568 | 278 | 0.7562 | 0.5756 | 0.7562 | 0.8696 |
| No log | 3.7838 | 280 | 0.8710 | 0.5463 | 0.8710 | 0.9333 |
| No log | 3.8108 | 282 | 1.1037 | 0.5863 | 1.1037 | 1.0506 |
| No log | 3.8378 | 284 | 1.3326 | 0.5897 | 1.3326 | 1.1544 |
| No log | 3.8649 | 286 | 1.2957 | 0.5363 | 1.2957 | 1.1383 |
| No log | 3.8919 | 288 | 1.0685 | 0.5363 | 1.0685 | 1.0337 |
| No log | 3.9189 | 290 | 0.9274 | 0.5662 | 0.9274 | 0.9630 |
| No log | 3.9459 | 292 | 0.9212 | 0.5662 | 0.9212 | 0.9598 |
| No log | 3.9730 | 294 | 0.9711 | 0.5736 | 0.9711 | 0.9854 |
| No log | 4.0 | 296 | 0.9768 | 0.5446 | 0.9768 | 0.9883 |
| No log | 4.0270 | 298 | 1.0068 | 0.5446 | 1.0068 | 1.0034 |
| No log | 4.0541 | 300 | 0.9004 | 0.5264 | 0.9004 | 0.9489 |
| No log | 4.0811 | 302 | 0.7511 | 0.6125 | 0.7511 | 0.8666 |
| No log | 4.1081 | 304 | 0.7107 | 0.6248 | 0.7107 | 0.8430 |
| No log | 4.1351 | 306 | 0.7506 | 0.6370 | 0.7506 | 0.8664 |
| No log | 4.1622 | 308 | 0.7991 | 0.6370 | 0.7991 | 0.8939 |
| No log | 4.1892 | 310 | 0.8414 | 0.6237 | 0.8414 | 0.9173 |
| No log | 4.2162 | 312 | 0.9419 | 0.6330 | 0.9419 | 0.9705 |
| No log | 4.2432 | 314 | 0.9711 | 0.6330 | 0.9711 | 0.9854 |
| No log | 4.2703 | 316 | 0.8804 | 0.6330 | 0.8804 | 0.9383 |
| No log | 4.2973 | 318 | 0.7657 | 0.6584 | 0.7657 | 0.8750 |
| No log | 4.3243 | 320 | 0.6735 | 0.6464 | 0.6735 | 0.8207 |
| No log | 4.3514 | 322 | 0.5964 | 0.6026 | 0.5964 | 0.7723 |
| No log | 4.3784 | 324 | 0.6142 | 0.6178 | 0.6142 | 0.7837 |
| No log | 4.4054 | 326 | 0.7272 | 0.6450 | 0.7272 | 0.8527 |
| No log | 4.4324 | 328 | 0.8405 | 0.6450 | 0.8405 | 0.9168 |
| No log | 4.4595 | 330 | 0.9530 | 0.6330 | 0.9530 | 0.9762 |
| No log | 4.4865 | 332 | 1.0030 | 0.6330 | 1.0030 | 1.0015 |
| No log | 4.5135 | 334 | 1.0420 | 0.6330 | 1.0420 | 1.0208 |
| No log | 4.5405 | 336 | 0.9613 | 0.6330 | 0.9613 | 0.9805 |
| No log | 4.5676 | 338 | 0.8981 | 0.6330 | 0.8981 | 0.9477 |
| No log | 4.5946 | 340 | 0.9123 | 0.6330 | 0.9123 | 0.9551 |
| No log | 4.6216 | 342 | 0.8979 | 0.6330 | 0.8979 | 0.9476 |
| No log | 4.6486 | 344 | 0.8198 | 0.6330 | 0.8198 | 0.9054 |
| No log | 4.6757 | 346 | 0.7930 | 0.6332 | 0.7930 | 0.8905 |
| No log | 4.7027 | 348 | 0.8255 | 0.6450 | 0.8255 | 0.9086 |
| No log | 4.7297 | 350 | 0.8268 | 0.6025 | 0.8268 | 0.9093 |
| No log | 4.7568 | 352 | 0.8005 | 0.6025 | 0.8005 | 0.8947 |
| No log | 4.7838 | 354 | 0.7290 | 0.6491 | 0.7290 | 0.8538 |
| No log | 4.8108 | 356 | 0.6279 | 0.6627 | 0.6279 | 0.7924 |
| No log | 4.8378 | 358 | 0.6226 | 0.6627 | 0.6226 | 0.7891 |
| No log | 4.8649 | 360 | 0.6636 | 0.6491 | 0.6636 | 0.8146 |
| No log | 4.8919 | 362 | 0.7200 | 0.6741 | 0.7200 | 0.8485 |
| No log | 4.9189 | 364 | 0.8600 | 0.6237 | 0.8600 | 0.9273 |
| No log | 4.9459 | 366 | 0.9321 | 0.6330 | 0.9321 | 0.9655 |
| No log | 4.9730 | 368 | 0.8970 | 0.6330 | 0.8970 | 0.9471 |
| No log | 5.0 | 370 | 0.8161 | 0.6450 | 0.8161 | 0.9034 |
| No log | 5.0270 | 372 | 0.6918 | 0.5800 | 0.6918 | 0.8318 |
| No log | 5.0541 | 374 | 0.6216 | 0.6767 | 0.6216 | 0.7884 |
| No log | 5.0811 | 376 | 0.6166 | 0.6617 | 0.6166 | 0.7852 |
| No log | 5.1081 | 378 | 0.6789 | 0.5800 | 0.6789 | 0.8240 |
| No log | 5.1351 | 380 | 0.8014 | 0.5800 | 0.8014 | 0.8952 |
| No log | 5.1622 | 382 | 0.9834 | 0.6583 | 0.9834 | 0.9917 |
| No log | 5.1892 | 384 | 1.1732 | 0.5588 | 1.1732 | 1.0832 |
| No log | 5.2162 | 386 | 1.2125 | 0.5588 | 1.2125 | 1.1011 |
| No log | 5.2432 | 388 | 1.1149 | 0.5810 | 1.1149 | 1.0559 |
| No log | 5.2703 | 390 | 1.0537 | 0.5810 | 1.0537 | 1.0265 |
| No log | 5.2973 | 392 | 1.0277 | 0.5717 | 1.0277 | 1.0137 |
| No log | 5.3243 | 394 | 0.9395 | 0.6288 | 0.9395 | 0.9693 |
| No log | 5.3514 | 396 | 0.9370 | 0.6288 | 0.9370 | 0.9680 |
| No log | 5.3784 | 398 | 1.0271 | 0.5717 | 1.0271 | 1.0135 |
| No log | 5.4054 | 400 | 1.1185 | 0.6169 | 1.1185 | 1.0576 |
| No log | 5.4324 | 402 | 1.1081 | 0.6169 | 1.1081 | 1.0527 |
| No log | 5.4595 | 404 | 0.9956 | 0.5717 | 0.9956 | 0.9978 |
| No log | 5.4865 | 406 | 0.8561 | 0.6288 | 0.8561 | 0.9253 |
| No log | 5.5135 | 408 | 0.7732 | 0.6288 | 0.7732 | 0.8793 |
| No log | 5.5405 | 410 | 0.7792 | 0.6162 | 0.7792 | 0.8827 |
| No log | 5.5676 | 412 | 0.8419 | 0.6288 | 0.8419 | 0.9176 |
| No log | 5.5946 | 414 | 0.9547 | 0.6169 | 0.9547 | 0.9771 |
| No log | 5.6216 | 416 | 1.0298 | 0.5810 | 1.0298 | 1.0148 |
| No log | 5.6486 | 418 | 1.0337 | 0.5810 | 1.0337 | 1.0167 |
| No log | 5.6757 | 420 | 1.0101 | 0.5810 | 1.0101 | 1.0050 |
| No log | 5.7027 | 422 | 0.9510 | 0.5810 | 0.9510 | 0.9752 |
| No log | 5.7297 | 424 | 0.8273 | 0.6205 | 0.8273 | 0.9096 |
| No log | 5.7568 | 426 | 0.7830 | 0.6162 | 0.7830 | 0.8849 |
| No log | 5.7838 | 428 | 0.7831 | 0.6162 | 0.7831 | 0.8850 |
| No log | 5.8108 | 430 | 0.7919 | 0.6162 | 0.7919 | 0.8899 |
| No log | 5.8378 | 432 | 0.8465 | 0.6586 | 0.8465 | 0.9200 |
| No log | 5.8649 | 434 | 0.8781 | 0.6586 | 0.8781 | 0.9371 |
| No log | 5.8919 | 436 | 0.9466 | 0.6586 | 0.9466 | 0.9729 |
| No log | 5.9189 | 438 | 0.9319 | 0.6586 | 0.9319 | 0.9653 |
| No log | 5.9459 | 440 | 0.8672 | 0.6162 | 0.8672 | 0.9312 |
| No log | 5.9730 | 442 | 0.8395 | 0.6162 | 0.8395 | 0.9162 |
| No log | 6.0 | 444 | 0.8291 | 0.6162 | 0.8291 | 0.9106 |
| No log | 6.0270 | 446 | 0.7757 | 0.6458 | 0.7757 | 0.8808 |
| No log | 6.0541 | 448 | 0.7536 | 0.6473 | 0.7536 | 0.8681 |
| No log | 6.0811 | 450 | 0.7428 | 0.6617 | 0.7428 | 0.8618 |
| No log | 6.1081 | 452 | 0.7761 | 0.6334 | 0.7761 | 0.8810 |
| No log | 6.1351 | 454 | 0.8072 | 0.6334 | 0.8072 | 0.8985 |
| No log | 6.1622 | 456 | 0.7792 | 0.6334 | 0.7792 | 0.8827 |
| No log | 6.1892 | 458 | 0.8000 | 0.6731 | 0.8000 | 0.8944 |
| No log | 6.2162 | 460 | 0.8477 | 0.6731 | 0.8477 | 0.9207 |
| No log | 6.2432 | 462 | 0.8624 | 0.6731 | 0.8624 | 0.9286 |
| No log | 6.2703 | 464 | 0.8912 | 0.6332 | 0.8912 | 0.9440 |
| No log | 6.2973 | 466 | 0.9355 | 0.5752 | 0.9355 | 0.9672 |
| No log | 6.3243 | 468 | 0.9129 | 0.6332 | 0.9129 | 0.9554 |
| No log | 6.3514 | 470 | 0.8952 | 0.6332 | 0.8952 | 0.9461 |
| No log | 6.3784 | 472 | 0.8866 | 0.6219 | 0.8866 | 0.9416 |
| No log | 6.4054 | 474 | 0.8528 | 0.6731 | 0.8528 | 0.9235 |
| No log | 6.4324 | 476 | 0.8190 | 0.6731 | 0.8190 | 0.9050 |
| No log | 6.4595 | 478 | 0.8209 | 0.6731 | 0.8209 | 0.9060 |
| No log | 6.4865 | 480 | 0.8911 | 0.6731 | 0.8911 | 0.9440 |
| No log | 6.5135 | 482 | 1.0100 | 0.5752 | 1.0100 | 1.0050 |
| No log | 6.5405 | 484 | 1.0937 | 0.5118 | 1.0937 | 1.0458 |
| No log | 6.5676 | 486 | 1.0894 | 0.6082 | 1.0894 | 1.0438 |
| No log | 6.5946 | 488 | 1.0247 | 0.6055 | 1.0247 | 1.0123 |
| No log | 6.6216 | 490 | 0.9509 | 0.5607 | 0.9509 | 0.9751 |
| No log | 6.6486 | 492 | 0.8651 | 0.5607 | 0.8651 | 0.9301 |
| No log | 6.6757 | 494 | 0.7804 | 0.6162 | 0.7804 | 0.8834 |
| No log | 6.7027 | 496 | 0.7404 | 0.6458 | 0.7404 | 0.8605 |
| No log | 6.7297 | 498 | 0.7270 | 0.6473 | 0.7270 | 0.8526 |
| 0.4266 | 6.7568 | 500 | 0.7548 | 0.6334 | 0.7548 | 0.8688 |
| 0.4266 | 6.7838 | 502 | 0.8234 | 0.6162 | 0.8234 | 0.9074 |
| 0.4266 | 6.8108 | 504 | 0.9579 | 0.5709 | 0.9579 | 0.9787 |
| 0.4266 | 6.8378 | 506 | 1.1293 | 0.5752 | 1.1293 | 1.0627 |
| 0.4266 | 6.8649 | 508 | 1.1793 | 0.5928 | 1.1793 | 1.0859 |
| 0.4266 | 6.8919 | 510 | 1.1351 | 0.5928 | 1.1351 | 1.0654 |
| 0.4266 | 6.9189 | 512 | 1.0025 | 0.5752 | 1.0025 | 1.0013 |
| 0.4266 | 6.9459 | 514 | 0.8682 | 0.6097 | 0.8682 | 0.9318 |
| 0.4266 | 6.9730 | 516 | 0.7882 | 0.6097 | 0.7882 | 0.8878 |
| 0.4266 | 7.0 | 518 | 0.7807 | 0.6097 | 0.7807 | 0.8836 |
| 0.4266 | 7.0270 | 520 | 0.7672 | 0.5678 | 0.7672 | 0.8759 |
| 0.4266 | 7.0541 | 522 | 0.8020 | 0.6097 | 0.8020 | 0.8955 |
| 0.4266 | 7.0811 | 524 | 0.8109 | 0.6097 | 0.8109 | 0.9005 |
| 0.4266 | 7.1081 | 526 | 0.8038 | 0.6097 | 0.8038 | 0.8965 |
| 0.4266 | 7.1351 | 528 | 0.8227 | 0.6097 | 0.8227 | 0.9070 |
| 0.4266 | 7.1622 | 530 | 0.8131 | 0.6097 | 0.8131 | 0.9017 |
| 0.4266 | 7.1892 | 532 | 0.7975 | 0.5678 | 0.7975 | 0.8930 |
| 0.4266 | 7.2162 | 534 | 0.8131 | 0.6097 | 0.8131 | 0.9017 |
| 0.4266 | 7.2432 | 536 | 0.8478 | 0.5612 | 0.8478 | 0.9207 |
| 0.4266 | 7.2703 | 538 | 0.8585 | 0.5612 | 0.8585 | 0.9265 |
| 0.4266 | 7.2973 | 540 | 0.8879 | 0.5709 | 0.8879 | 0.9423 |
| 0.4266 | 7.3243 | 542 | 0.8869 | 0.5612 | 0.8869 | 0.9418 |
| 0.4266 | 7.3514 | 544 | 0.8290 | 0.6357 | 0.8290 | 0.9105 |
| 0.4266 | 7.3784 | 546 | 0.7615 | 0.5962 | 0.7615 | 0.8726 |
| 0.4266 | 7.4054 | 548 | 0.7374 | 0.5962 | 0.7374 | 0.8587 |
| 0.4266 | 7.4324 | 550 | 0.7672 | 0.5962 | 0.7672 | 0.8759 |
| 0.4266 | 7.4595 | 552 | 0.7826 | 0.5962 | 0.7826 | 0.8847 |
| 0.4266 | 7.4865 | 554 | 0.7702 | 0.5962 | 0.7702 | 0.8776 |
| 0.4266 | 7.5135 | 556 | 0.7774 | 0.6071 | 0.7774 | 0.8817 |
| 0.4266 | 7.5405 | 558 | 0.7798 | 0.6071 | 0.7798 | 0.8831 |
| 0.4266 | 7.5676 | 560 | 0.7741 | 0.6458 | 0.7741 | 0.8798 |
| 0.4266 | 7.5946 | 562 | 0.7661 | 0.6458 | 0.7661 | 0.8753 |
| 0.4266 | 7.6216 | 564 | 0.7771 | 0.6458 | 0.7771 | 0.8815 |
| 0.4266 | 7.6486 | 566 | 0.8302 | 0.6071 | 0.8302 | 0.9112 |
| 0.4266 | 7.6757 | 568 | 0.8715 | 0.5783 | 0.8715 | 0.9336 |
| 0.4266 | 7.7027 | 570 | 0.8846 | 0.5266 | 0.8846 | 0.9405 |
| 0.4266 | 7.7297 | 572 | 0.8832 | 0.6071 | 0.8832 | 0.9398 |
| 0.4266 | 7.7568 | 574 | 0.8814 | 0.6071 | 0.8814 | 0.9388 |
| 0.4266 | 7.7838 | 576 | 0.8844 | 0.6071 | 0.8844 | 0.9404 |
| 0.4266 | 7.8108 | 578 | 0.8666 | 0.6071 | 0.8666 | 0.9309 |
| 0.4266 | 7.8378 | 580 | 0.8463 | 0.6071 | 0.8463 | 0.9199 |
| 0.4266 | 7.8649 | 582 | 0.8353 | 0.6071 | 0.8353 | 0.9140 |
| 0.4266 | 7.8919 | 584 | 0.8433 | 0.6071 | 0.8433 | 0.9183 |
| 0.4266 | 7.9189 | 586 | 0.8625 | 0.6071 | 0.8625 | 0.9287 |
| 0.4266 | 7.9459 | 588 | 0.8832 | 0.6468 | 0.8832 | 0.9398 |
| 0.4266 | 7.9730 | 590 | 0.9184 | 0.6468 | 0.9184 | 0.9583 |
| 0.4266 | 8.0 | 592 | 0.9172 | 0.6468 | 0.9172 | 0.9577 |
| 0.4266 | 8.0270 | 594 | 0.9094 | 0.6468 | 0.9094 | 0.9536 |
| 0.4266 | 8.0541 | 596 | 0.9169 | 0.6468 | 0.9169 | 0.9576 |
| 0.4266 | 8.0811 | 598 | 0.9145 | 0.5978 | 0.9145 | 0.9563 |
| 0.4266 | 8.1081 | 600 | 0.9173 | 0.5978 | 0.9173 | 0.9577 |
| 0.4266 | 8.1351 | 602 | 0.9256 | 0.5709 | 0.9256 | 0.9621 |
| 0.4266 | 8.1622 | 604 | 0.9153 | 0.5709 | 0.9153 | 0.9567 |
| 0.4266 | 8.1892 | 606 | 0.9113 | 0.5709 | 0.9113 | 0.9546 |
| 0.4266 | 8.2162 | 608 | 0.9307 | 0.5709 | 0.9307 | 0.9647 |
| 0.4266 | 8.2432 | 610 | 0.9471 | 0.5709 | 0.9471 | 0.9732 |
| 0.4266 | 8.2703 | 612 | 0.9163 | 0.6205 | 0.9163 | 0.9573 |
| 0.4266 | 8.2973 | 614 | 0.8723 | 0.6205 | 0.8723 | 0.9340 |
| 0.4266 | 8.3243 | 616 | 0.8632 | 0.6205 | 0.8632 | 0.9291 |
| 0.4266 | 8.3514 | 618 | 0.8633 | 0.6205 | 0.8633 | 0.9291 |
| 0.4266 | 8.3784 | 620 | 0.8649 | 0.6205 | 0.8649 | 0.9300 |
| 0.4266 | 8.4054 | 622 | 0.8930 | 0.6205 | 0.8930 | 0.9450 |
| 0.4266 | 8.4324 | 624 | 0.9453 | 0.6205 | 0.9453 | 0.9723 |
| 0.4266 | 8.4595 | 626 | 0.9992 | 0.6205 | 0.9992 | 0.9996 |
| 0.4266 | 8.4865 | 628 | 1.0558 | 0.5752 | 1.0558 | 1.0275 |
| 0.4266 | 8.5135 | 630 | 1.1213 | 0.5752 | 1.1213 | 1.0589 |
| 0.4266 | 8.5405 | 632 | 1.1629 | 0.5752 | 1.1629 | 1.0784 |
| 0.4266 | 8.5676 | 634 | 1.1592 | 0.5752 | 1.1592 | 1.0767 |
| 0.4266 | 8.5946 | 636 | 1.1555 | 0.5752 | 1.1555 | 1.0749 |
| 0.4266 | 8.6216 | 638 | 1.1152 | 0.5752 | 1.1152 | 1.0560 |
| 0.4266 | 8.6486 | 640 | 1.0720 | 0.5752 | 1.0720 | 1.0354 |
| 0.4266 | 8.6757 | 642 | 1.0413 | 0.5709 | 1.0413 | 1.0204 |
| 0.4266 | 8.7027 | 644 | 1.0188 | 0.5709 | 1.0188 | 1.0093 |
| 0.4266 | 8.7297 | 646 | 0.9983 | 0.5709 | 0.9983 | 0.9992 |
| 0.4266 | 8.7568 | 648 | 0.9572 | 0.5978 | 0.9572 | 0.9783 |
| 0.4266 | 8.7838 | 650 | 0.9193 | 0.5878 | 0.9193 | 0.9588 |
| 0.4266 | 8.8108 | 652 | 0.8702 | 0.5878 | 0.8702 | 0.9328 |
| 0.4266 | 8.8378 | 654 | 0.8227 | 0.6357 | 0.8227 | 0.9070 |
| 0.4266 | 8.8649 | 656 | 0.7797 | 0.6334 | 0.7797 | 0.8830 |
| 0.4266 | 8.8919 | 658 | 0.7511 | 0.6334 | 0.7511 | 0.8667 |
| 0.4266 | 8.9189 | 660 | 0.7385 | 0.6334 | 0.7385 | 0.8594 |
| 0.4266 | 8.9459 | 662 | 0.7455 | 0.6334 | 0.7455 | 0.8634 |
| 0.4266 | 8.9730 | 664 | 0.7659 | 0.5962 | 0.7659 | 0.8752 |
| 0.4266 | 9.0 | 666 | 0.7896 | 0.5962 | 0.7896 | 0.8886 |
| 0.4266 | 9.0270 | 668 | 0.8223 | 0.5878 | 0.8223 | 0.9068 |
| 0.4266 | 9.0541 | 670 | 0.8704 | 0.5878 | 0.8704 | 0.9329 |
| 0.4266 | 9.0811 | 672 | 0.9047 | 0.5878 | 0.9047 | 0.9511 |
| 0.4266 | 9.1081 | 674 | 0.9303 | 0.5878 | 0.9303 | 0.9645 |
| 0.4266 | 9.1351 | 676 | 0.9349 | 0.5878 | 0.9349 | 0.9669 |
| 0.4266 | 9.1622 | 678 | 0.9179 | 0.5878 | 0.9179 | 0.9581 |
| 0.4266 | 9.1892 | 680 | 0.8895 | 0.5878 | 0.8895 | 0.9431 |
| 0.4266 | 9.2162 | 682 | 0.8746 | 0.5878 | 0.8746 | 0.9352 |
| 0.4266 | 9.2432 | 684 | 0.8507 | 0.5878 | 0.8507 | 0.9224 |
| 0.4266 | 9.2703 | 686 | 0.8234 | 0.5878 | 0.8234 | 0.9074 |
| 0.4266 | 9.2973 | 688 | 0.8049 | 0.6357 | 0.8049 | 0.8972 |
| 0.4266 | 9.3243 | 690 | 0.7846 | 0.5962 | 0.7846 | 0.8858 |
| 0.4266 | 9.3514 | 692 | 0.7732 | 0.5962 | 0.7732 | 0.8793 |
| 0.4266 | 9.3784 | 694 | 0.7750 | 0.5962 | 0.7750 | 0.8803 |
| 0.4266 | 9.4054 | 696 | 0.7900 | 0.6357 | 0.7900 | 0.8888 |
| 0.4266 | 9.4324 | 698 | 0.8111 | 0.6357 | 0.8111 | 0.9006 |
| 0.4266 | 9.4595 | 700 | 0.8383 | 0.6357 | 0.8383 | 0.9156 |
| 0.4266 | 9.4865 | 702 | 0.8694 | 0.5878 | 0.8694 | 0.9324 |
| 0.4266 | 9.5135 | 704 | 0.9062 | 0.5878 | 0.9062 | 0.9520 |
| 0.4266 | 9.5405 | 706 | 0.9435 | 0.5878 | 0.9435 | 0.9713 |
| 0.4266 | 9.5676 | 708 | 0.9757 | 0.5878 | 0.9757 | 0.9878 |
| 0.4266 | 9.5946 | 710 | 1.0077 | 0.5878 | 1.0077 | 1.0038 |
| 0.4266 | 9.6216 | 712 | 1.0241 | 0.5709 | 1.0241 | 1.0120 |
| 0.4266 | 9.6486 | 714 | 1.0343 | 0.5709 | 1.0343 | 1.0170 |
| 0.4266 | 9.6757 | 716 | 1.0368 | 0.5709 | 1.0368 | 1.0182 |
| 0.4266 | 9.7027 | 718 | 1.0395 | 0.5709 | 1.0395 | 1.0195 |
| 0.4266 | 9.7297 | 720 | 1.0370 | 0.5709 | 1.0370 | 1.0183 |
| 0.4266 | 9.7568 | 722 | 1.0303 | 0.5709 | 1.0303 | 1.0150 |
| 0.4266 | 9.7838 | 724 | 1.0237 | 0.5709 | 1.0237 | 1.0118 |
| 0.4266 | 9.8108 | 726 | 1.0173 | 0.5978 | 1.0173 | 1.0086 |
| 0.4266 | 9.8378 | 728 | 1.0140 | 0.5878 | 1.0140 | 1.0070 |
| 0.4266 | 9.8649 | 730 | 1.0087 | 0.5878 | 1.0087 | 1.0043 |
| 0.4266 | 9.8919 | 732 | 1.0041 | 0.5878 | 1.0041 | 1.0021 |
| 0.4266 | 9.9189 | 734 | 1.0005 | 0.5878 | 1.0005 | 1.0003 |
| 0.4266 | 9.9459 | 736 | 0.9968 | 0.5878 | 0.9968 | 0.9984 |
| 0.4266 | 9.9730 | 738 | 0.9940 | 0.5878 | 0.9940 | 0.9970 |
| 0.4266 | 10.0 | 740 | 0.9927 | 0.5878 | 0.9927 | 0.9963 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
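
For reference, the reported metrics (QWK, MSE, RMSE) can be computed with scikit-learn; the small sketch below uses placeholder scores, not the actual validation data:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([0, 1, 2, 2, 3, 1])   # placeholder gold scores
y_pred = np.array([0, 1, 1, 2, 3, 2])   # placeholder (rounded) model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
print(f"QWK={qwk:.4f}  MSE={mse:.4f}  RMSE={np.sqrt(mse):.4f}")
```
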
|
jebish7/bge-small-en-v1.5_MNSR_12 | jebish7 | 2024-11-25T15:48:13Z | 10 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:29545",
"loss:MultipleNegativesSymmetricRankingLoss",
"arxiv:1908.10084",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-11-25T15:48:06Z | ---
base_model: BAAI/bge-small-en-v1.5
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:29545
- loss:MultipleNegativesSymmetricRankingLoss
widget:
- source_sentence: In terms of audited accounts submission for an Applicant, could
you clarify the scenarios in which the Regulator might agree that a reviewed pro
forma statement of financial position is not needed, and what factors would be
considered in making that determination?
sentences:
- "DocumentID: 1 | PassageID: 4.2.1.(3) | Passage: Where the regulator in another\
\ jurisdiction does not permit the implementation of policies, procedures, systems\
\ and controls consistent with these Rules, the Relevant Person must:\n(a)\tinform\
\ the Regulator in writing immediately; and\n(b)\tapply appropriate additional\
\ measures to manage the money laundering risks posed by the relevant branch or\
\ subsidiary."
- "DocumentID: 11 | PassageID: 2.3.15.(4) | Passage: The Applicant must submit to\
\ the Regulator the following records, as applicable:\n(a)\tAudited accounts,\
\ for the purposes of this Rule and Rule 2.3.2(1), for the last three full financial\
\ years, noting that:\n(i)\tif the Applicant applies for admission less than ninety\
\ days after the end of its last financial year, unless the Applicant has audited\
\ accounts for its latest full financial year, the accounts may be for the three\
\ years to the end of the previous financial year, but must also include audited\
\ or reviewed accounts for its most recent semi-annual financial reporting period;\
\ and\n(ii)\tif the Applicant applies for admission more than six months and seventy-five\
\ days after the end of its last financial year, audited or reviewed accounts\
\ for its most recent semi-annual financial reporting period (or longer period\
\ if available).\n(b)\tUnless the Regulator agrees it is not needed, a reviewed\
\ pro forma statement of financial position. The review must be conducted by an\
\ accredited professional auditor of the company or an independent accountant."
- 'DocumentID: 36 | PassageID: D.1.3. | Passage: Principle 1 β Oversight and responsibility
of climate-related financial risk exposures.Certain functions related to the management
of climate-related financial risks may be delegated, but, as with other risks,
the board is ultimately responsible and accountable for monitoring, managing and
overseeing climate-related risks for the financial firm.
'
- source_sentence: A financial institution is interested in multiple designations,
including the ADGM Green Fund and ADGM Green Bond. For each application, what
fee will the institution incur?
sentences:
- 'DocumentID: 31 | PassageID: 63) | Passage: INITIAL DISCLOSURE OF MATERIAL ESTIMATES.
Disclosure of material estimates of Contingent Resources
Section 2.3 of the PRMS Guidelines states that Contingent Resources may be assigned
for Petroleum Projects that are dependent on βtechnology under developmentβ, and
further recommended that a number of guidelines are followed in order to distinguish
these estimates from those that should be classified as Unrecoverable Petroleum. By
way of Rule 12.10.1(3), the FSRA fully supports and requires compliance with what
is set out in the PRMS Guidelines.
'
- 'DocumentID: 19 | PassageID: 40) | Passage: REGULATORY REQUIREMENTS FOR AUTHORISED
PERSONS ENGAGED IN REGULATED ACTIVITIES IN RELATION TO VIRTUAL ASSETS
Anti-Money Laundering and Countering Financing of Terrorism
On 21 June 2019, FATF released a revised Guidance for a Risk-Based Approach (RBA)
for VAs and VASPs, as well as an Interpretative Note for Recommendation 15. This
built upon previous FATF statements by clarifying a RBA for Anti-Money Laundering
and Countering the Financing of Terrorism (βAML/CFTβ) purposes. The basic principle
underlying the FATF Guidelines is that VASPs are expected to βidentify, assess,
and take effective action to mitigate their ML/TF risksβ with respect to VAs.
'
- "DocumentID: 4 | PassageID: 10.1.1 | Passage: A Person applying to the Regulator\
\ for any of the following designations:\n(a)\tADGM Green Fund;\n(b)\tADGM Climate\
\ Transition Fund;\n(c)\tADGM Green Portfolio;\n(d)\tADGM Climate Transition Portfolio;\n\
(e)\tADGM Green Bond; or\n(f)\tADGM Sustainability Linked Bond\nmust pay to the\
\ Regulator an application fee of $2,000."
- source_sentence: How does the ADGM expect Authorised Persons to incorporate the
eligibility of collateral types into their overall risk management framework,
particularly concerning Islamic finance principles?
sentences:
- 'DocumentID: 17 | PassageID: Schedule 1.Part 2.Chapter 5.42.(2) | Passage: In
determining for the purposes of sub-paragraph β(1)β(b) whether Deposits are accepted
only on particular occasions, regard is to be had to the frequency of those occasions
and to any characteristics distinguishing them from each other.'
- "DocumentID: 9 | PassageID: 6.8.5 | Passage: \n(a)\tA Fund Manager of an Islamic\
\ REIT may obtain financing either directly or through its Special Purpose Vehicle\
\ up to 65% of the total gross asset value of the Fund provided that such financing\
\ is provided in a Shari'a-compliant manner.\n(b)\tUpon becoming aware that the\
\ borrowing limit set out in 6.8.5(a) has been exceeded, the Fund Manager shall:\n\
(c)\timmediately inform Unitholders and the Regulator of the details of the breach\
\ and the proposed remedial action;\n(d)\tuse its best endeavours to reduce the\
\ excess borrowings;\n(e)\tnot permit the Fund to engage in additional borrowing;\
\ and\n(f)\tinform Unitholders and the Regulator on a regular basis as to the\
\ progress of the remedial action."
- 'DocumentID: 9 | PassageID: 5.1.1.Guidance.(ii) | Passage: The prudential Category
for Islamic Financial Institutions and other Authorised Persons (acting through
an Islamic Window) undertaking the Regulated Activity of Managing PSIAs (which
may be either a Restricted PSIA or an Unrestricted PSIA) is determined in accordance
with PRU Rule 1.3. An Authorised Person which Manages PSIAs (whether as an Islamic
Financial Institution or through an Islamic Window) must comply with the requirements
in PRU in relation to specific prudential requirements relating to Trading Book
and Non-Trading Book activities, including Credit Risk, Market Risk, Liquidity
Risk and Group Risk.'
- source_sentence: Can you please detail the specific Anti-Money Laundering (AML)
and Countering Financing of Terrorism (CFT) measures and controls that our firm
must have in place when dealing with Spot Commodities as per the FSRA's requirements?
sentences:
- 'DocumentID: 34 | PassageID: 65) | Passage: REGULATORY REQUIREMENTS - SPOT COMMODITY
ACTIVITIES
Sanctions
Pursuant to AML Rule 11.2.1(1), an Authorised Person must have arrangements in
place to ensure that only Spot Commodities that are not subject to sanctions or
associated with an entity in the supply chain that is itself subject to a sanction,
are used as part of its Regulated Activities, or utilised as part of a delivery
and/or storage facility operated by itself (or by any third parties it uses). In
demonstrating compliance with the Rule, an Authorised Person must have powers
to resolve any breach in a timely fashion, such as taking emergency action itself
or by compelling the delivery and/or storage facility to take appropriate action. The
FSRA expects this to include the Authorised Person having the ability to sanction
a Member, market participant or the delivery and/or storage facility for acts
or omissions that compromise compliance with applicable sanctions.
'
- "DocumentID: 18 | PassageID: 3.2 | Passage: Financial Services Permissions. VC\
\ Managers operating in ADGM require a Financial Services Permission (βFSPβ) to\
\ undertake any Regulated Activity pertaining to VC Funds and/or co-investments\
\ by third parties in VC Funds. The Regulated Activities covered by the FSP will\
\ be dependent on the VC Managersβ investment strategy and business model.\n(a)\t\
Managing a Collective Investment Fund: this includes carrying out fund management\
\ activities in respect of a VC Fund.\n(b)\tAdvising on Investments or Credit\
\ : for VC Managers these activities will be restricted to activities related\
\ to co-investment alongside a VC Fund which the VC Manager manages, such as recommending\
\ that a client invest in an investee company alongside the VC Fund and on the\
\ strategy and structure required to make the investment.\n(c)\tArranging Deals\
\ in Investments: VC Managers may also wish to make arrangements to facilitate\
\ co-investments in the investee company.\nAuthorisation fees and supervision\
\ fees for a VC Manager are capped at USD 10,000 regardless of whether one or\
\ both of the additional Regulated Activities in b) and c) above in relation to\
\ co-investments are included in its FSP. The FSP will include restrictions appropriate\
\ to the business model of a VC Manager."
- 'DocumentID: 24 | PassageID: 3.9 | Passage: Principle 2 β High Standards for Authorisation.
This discerning approach is shown by the FSRAβs power to only permit VAs that
it deems βacceptableβ, as determined by risk factors such as security and traceability,
in order to prevent the build-up of risk from illiquid or immature assets. Additionally,
we do not permit stablecoins based on the algorithmic model of valuation to the
underlying fiat currency.'
- source_sentence: What are the common scenarios or instances where assets and liabilities
are not covered by the bases of accounting in Rule 5.3.2, and how should an Insurer
address these in their reporting?
sentences:
- 'DocumentID: 1 | PassageID: 14.4.1.Guidance.1. | Passage: Relevant Persons are
reminded that in accordance with Federal AML Legislation, Relevant Persons or
any of their Employees must not tip off any Person, that is, inform any Person
that he is being scrutinised, or investigated by any other competent authority,
for possible involvement in suspicious Transactions or activity related to money
laundering or terrorist financing.'
- "DocumentID: 12 | PassageID: 5.3.1.Guidance | Passage: \nThe exceptions provided\
\ in this Chapter relate to the following:\na.\tspecific Rules in respect of certain\
\ assets and liabilities, intended to achieve a regulatory objective not achieved\
\ by application of either or both of the bases of accounting set out in Rule\
\ β5.3.2;\nb.\tassets and liabilities that are not dealt with in either or both\
\ of the bases of accounting set out in Rule β5.3.2; and\nc.\tthe overriding power\
\ of the Regulator, set out in Rule β5.1.6, to require an Insurer to adopt a particular\
\ measurement for a specific asset or liability."
- 'DocumentID: 1 | PassageID: 6.2.1.Guidance.2. | Passage: The risk assessment under
Rule β6.2.1(c) should identify actions to mitigate risks associated with undertaking
NFTF business generally, and the use of eKYC specifically. This is because distinct
risks are often likely to arise where business is conducted entirely in an NFTF
manner, compared to when the business relationship includes a mix of face-to-face
and NFTF interactions. The assessment should make reference to risk mitigation
measures recommended by the Regulator, a competent authority of the U.A.E., FATF,
and other relevant bodies.
'
---
# SentenceTransformer based on BAAI/bge-small-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) on the csv dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- csv
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the π€ Hub
model = SentenceTransformer("jebish7/bge-small-en-v1.5_MNSR_20")
# Run inference
sentences = [
'What are the common scenarios or instances where assets and liabilities are not covered by the bases of accounting in Rule 5.3.2, and how should an Insurer address these in their reporting?',
'DocumentID: 12 | PassageID: 5.3.1.Guidance | Passage: \nThe exceptions provided in this Chapter relate to the following:\na.\tspecific Rules in respect of certain assets and liabilities, intended to achieve a regulatory objective not achieved by application of either or both of the bases of accounting set out in Rule \u200e5.3.2;\nb.\tassets and liabilities that are not dealt with in either or both of the bases of accounting set out in Rule \u200e5.3.2; and\nc.\tthe overriding power of the Regulator, set out in Rule \u200e5.1.6, to require an Insurer to adopt a particular measurement for a specific asset or liability.',
'DocumentID: 1 | PassageID: 14.4.1.Guidance.1. | Passage: Relevant Persons are reminded that in accordance with Federal AML Legislation, Relevant Persons or any of their Employees must not tip off any Person, that is, inform any Person that he is being scrutinised, or investigated by any other competent authority, for possible involvement in suspicious Transactions or activity related to money laundering or terrorist financing.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
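Because the model was trained on question-passage pairs, a typical downstream use is retrieving the most relevant regulatory passage for a query. The snippet below is an illustrative sketch only; the query and the two corpus passages are shortened placeholders written in the style of the training data, not exact entries from it.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("jebish7/bge-small-en-v1.5_MNSR_20")

# Placeholder corpus and query (condensed from the style of the training pairs)
corpus = [
    "DocumentID: 4 | PassageID: 10.1.1 | Passage: A Person applying for the ADGM Green Fund designation must pay an application fee of $2,000.",
    "DocumentID: 1 | PassageID: 14.4.1.Guidance.1. | Passage: Relevant Persons must not tip off any Person that they are being investigated for possible involvement in money laundering.",
]
query = "What fee will an institution incur for an ADGM Green Fund application?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank the passages by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']][:80]}")
```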
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### csv
* Dataset: csv
* Size: 29,545 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 34.95 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 132.0 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>If a financial institution offers Money Remittance as one of its services, under what circumstances is it deemed to be holding Relevant Money and therefore subject to regulatory compliance (a)?</code> | <code>DocumentID: 13 | PassageID: 3.7.1.Guidance.1. | Passage: An Authorised Person is considered to be holding Relevant Money and subject to (a) where it offers Payment Services alongside currency exchange or Money Remittance.<br></code> |
| <code>What are the consequences for a Recognised Body or Authorised Person if they fail to comply with ADGM's requirements regarding severance payments?</code> | <code>DocumentID: 7 | PassageID: APP1.A1.2.Guidance.9. | Passage: Severance payments. Where an Authorised Person or Recognised Body provides discretionary payouts on termination of employment ("severance payments", also called "golden parachutes"), such payment should generally be subject to appropriate limits or shareholder approval. In any case, such payouts should be aligned with the Authorised Person or Recognised Body's overall financial condition and performance over an appropriate time horizon and should not be payable in the case of failure or threatened failure of the Authorised Person or Recognised Body, particularly to an individual whose actions may have contributed to the failure or potential failure of the Authorised Person or Recognised Body.<br></code> |
| <code>If a Public Fund is structured as an Investment Trust, to whom should the Fund Manager report the review findings regarding delegated Regulated Activities or outsourced functions?</code> | <code>DocumentID: 6 | PassageID: PART 5.12.12.8.(1) | Passage: A Fund Manager or the Trustee of a Public Fund, which has delegated any Regulated Activities or outsourced any functions, must conduct a review of the carrying out of the relevant activities or functions by the Service Provider and present the findings of the review to either:<br>(a) the Fund's Governing Body every 6 months at the Fund's board meeting; or<br>(b) in the case of a Fund structured as an Investment Trust, to the Trustee.</code> |
* Loss: [<code>MultipleNegativesSymmetricRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativessymmetricrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
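Combined with the non-default hyperparameters listed in the next subsection, this corresponds roughly to the Sentence Transformers 3.x training setup sketched below. This is a hedged reconstruction, not the original training script: the in-memory toy pair and the output directory name are placeholders standing in for the real csv data.
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Base model being fine-tuned
model = SentenceTransformer("BAAI/bge-small-en-v1.5")

# Placeholder stand-in for the csv dataset of (anchor, positive) pairs
train_dataset = Dataset.from_dict({
    "anchor": ["What fee applies to an ADGM Green Fund application?"],
    "positive": ["DocumentID: 4 | PassageID: 10.1.1 | Passage: ... an application fee of $2,000."],
})

# Loss used for this model: in-batch negatives, scored symmetrically in both directions
loss = losses.MultipleNegativesSymmetricRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-small-en-v1.5_MNSR",  # placeholder name
    num_train_epochs=3,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    batch_sampler="no_duplicates",  # avoid duplicate anchors acting as false negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```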
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `learning_rate`: 2e-05
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.2165 | 100 | 1.4357 |
| 0.4329 | 200 | 0.9589 |
| 0.6494 | 300 | 0.9193 |
| 0.8658 | 400 | 0.8542 |
| 1.0823 | 500 | 0.8643 |
| 1.2987 | 600 | 0.8135 |
| 1.5152 | 700 | 0.7658 |
| 1.7316 | 800 | 0.7454 |
| 1.9481 | 900 | 0.7477 |
| 2.1645 | 1000 | 0.7586 |
| 2.3810 | 1100 | 0.6978 |
| 2.5974 | 1200 | 0.7152 |
| 2.8139 | 1300 | 0.6866 |
| 0.2165 | 100 | 0.7049 |
| 0.4329 | 200 | 0.6651 |
| 0.6494 | 300 | 0.6942 |
| 0.8658 | 400 | 0.6695 |
| 1.0823 | 500 | 0.7048 |
| 1.2987 | 600 | 0.636 |
| 1.5152 | 700 | 0.5984 |
| 1.7316 | 800 | 0.6001 |
| 1.9481 | 900 | 0.6096 |
| 2.1645 | 1000 | 0.6313 |
| 2.3810 | 1100 | 0.5437 |
| 2.5974 | 1200 | 0.5716 |
| 2.8139 | 1300 | 0.5634 |
| 0.2165 | 100 | 0.5708 |
| 0.4329 | 200 | 0.5263 |
| 0.6494 | 300 | 0.5716 |
| 0.8658 | 400 | 0.5547 |
| 1.0823 | 500 | 0.5922 |
| 1.2987 | 600 | 0.5306 |
| 1.5152 | 700 | 0.4802 |
| 1.7316 | 800 | 0.4948 |
| 1.9481 | 900 | 0.512 |
| 2.1645 | 1000 | 0.532 |
| 2.3810 | 1100 | 0.4349 |
| 2.5974 | 1200 | 0.465 |
| 2.8139 | 1300 | 0.4657 |
| 0.2165 | 100 | 0.4757 |
| 0.4329 | 200 | 0.4193 |
| 0.6494 | 300 | 0.4815 |
| 0.8658 | 400 | 0.4715 |
| 1.0823 | 500 | 0.5156 |
| 1.2987 | 600 | 0.4341 |
| 1.5152 | 700 | 0.3942 |
| 1.7316 | 800 | 0.4242 |
| 1.9481 | 900 | 0.4342 |
| 2.1645 | 1000 | 0.4512 |
| 2.3810 | 1100 | 0.3505 |
| 2.5974 | 1200 | 0.3879 |
| 2.8139 | 1300 | 0.3959 |
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Triangle104/Qwen2.5-14B-Instruct-abliterated-v2-Q4_K_M-GGUF | Triangle104 | 2024-11-25T15:48:06Z | 72 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2",
"base_model:quantized:huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-18T14:10:33Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-14B-Instruct-abliterated-v2-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2`](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2) for more details on the model.
---
Model details:
-
This is an uncensored version of Qwen2.5-14B-Instruct created with abliteration (see the write-up referenced in the original model card to learn more about the technique).
Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
**Important note:** this version is an improvement over the previous release, Qwen2.5-14B-Instruct-abliterated.
Usage
You can use this model in your applications by loading it with Hugging Face's transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Initialize conversation context
initial_messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy()  # Copy the initial conversation context

# Enter conversation loop
while True:
    # Get user input
    user_input = input("User: ").strip()  # Strip leading and trailing spaces

    # If the user types '/exit', end the conversation
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break

    # If the user types '/clean', reset the conversation context
    if user_input.lower() == "/clean":
        messages = initial_messages.copy()  # Reset conversation context
        print("Chat history cleared. Starting a new conversation.")
        continue

    # If input is empty, prompt the user and continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    # Add user input to the conversation
    messages.append({"role": "user", "content": user_input})

    # Build the chat template
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )

    # Tokenize input and prepare it for the model
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # Generate a response from the model
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=8192
    )

    # Extract model output, removing special tokens
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # Add the model's response to the conversation
    messages.append({"role": "assistant", "content": response})

    # Print the model's response
    print(f"Qwen: {response}")
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-14B-Instruct-abliterated-v2-Q4_K_M-GGUF --hf-file qwen2.5-14b-instruct-abliterated-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-14B-Instruct-abliterated-v2-Q4_K_M-GGUF --hf-file qwen2.5-14b-instruct-abliterated-v2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-14B-Instruct-abliterated-v2-Q4_K_M-GGUF --hf-file qwen2.5-14b-instruct-abliterated-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-14B-Instruct-abliterated-v2-Q4_K_M-GGUF --hf-file qwen2.5-14b-instruct-abliterated-v2-q4_k_m.gguf -c 2048
```
|
dgambettaphd/M_gen0_run1_llama2-7b_wiki_doc1000_real32_synt96 | dgambettaphd | 2024-11-25T15:47:43Z | 123 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-11-25T15:45:44Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
camidenecken/RoBERTa-RM1-v2-4-rm-v18 | camidenecken | 2024-11-25T15:47:27Z | 146 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T15:46:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ehristoforu/RQwen-v0.2 | ehristoforu | 2024-11-25T15:45:22Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:ehristoforu/RQwen-v0.1",
"base_model:finetune:ehristoforu/RQwen-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-24T21:36:16Z | ---
base_model: ehristoforu/RQwen-v0.1
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ehristoforu
- **License:** apache-2.0
- **Finetuned from model :** ehristoforu/RQwen-v0.1
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Qwen1.5-32B-llamafied-i1-GGUF | mradermacher | 2024-11-25T15:45:00Z | 6 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mightbe/Qwen1.5-32B-llamafied",
"base_model:quantized:mightbe/Qwen1.5-32B-llamafied",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-25T07:25:51Z | ---
base_model: mightbe/Qwen1.5-32B-llamafied
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: qwen
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mightbe/Qwen1.5-32B-llamafied
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
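For a quick start with one of the single-file quants listed below, the following hedged sketch uses the `llama-cpp-python` bindings, which this card does not itself document (it assumes `llama-cpp-python` and `huggingface_hub` are installed):
```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

# Download a single-file quant from this repo and load it
llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen1.5-32B-llamafied-i1-GGUF",
    filename="Qwen1.5-32B-llamafied.i1-Q4_K_M.gguf",
    n_ctx=2048,
)

out = llm("The meaning of life is", max_tokens=64)
print(out["choices"][0]["text"])
```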
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-IQ1_S.gguf) | i1-IQ1_S | 7.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-IQ2_S.gguf) | i1-IQ2_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-IQ2_M.gguf) | i1-IQ2_M | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-Q2_K.gguf) | i1-Q2_K | 12.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-IQ3_S.gguf) | i1-IQ3_S | 14.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-IQ3_M.gguf) | i1-IQ3_M | 14.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-Q4_0.gguf) | i1-Q4_0 | 18.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen1.5-32B-llamafied-i1-GGUF/resolve/main/Qwen1.5-32B-llamafied.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF | mradermacher | 2024-11-25T15:44:03Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"ko",
"base_model:juengsi/DT-SL-SOLAR-10.7B-v0.1",
"base_model:quantized:juengsi/DT-SL-SOLAR-10.7B-v0.1",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-24T02:41:37Z | ---
base_model: juengsi/DT-SL-SOLAR-10.7B-v0.1
language:
- ko
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/juengsi/DT-SL-SOLAR-10.7B-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 4.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 6.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 6.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 6.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 6.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/DT-SL-SOLAR-10.7B-v0.1-i1-GGUF/resolve/main/DT-SL-SOLAR-10.7B-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 8.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
camidenecken/RoBERTa-RM1-v2-4-rm-v16 | camidenecken | 2024-11-25T15:42:37Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T15:42:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k15_task5_organization_fold0 | MayBashendy | 2024-11-25T15:41:52Z | 131 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T15:34:46Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k15_task5_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k15_task5_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8264
- Qwk: 0.6531
- Mse: 0.8264
- Rmse: 0.9091
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
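As a rough illustration only, these settings map onto a standard Hugging Face `TrainingArguments` configuration along the lines of the sketch below. The regression head (`num_labels=1`) is an assumption suggested by the MSE/RMSE metrics, and the dataset and output directory are placeholders rather than the actual training setup.
```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "aubmindlab/bert-base-arabertv02"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=1 -> single-score regression head (assumed from the MSE/RMSE metrics)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)

# Placeholder data: the real task-5 "organization" fold is not described in this card
raw = Dataset.from_dict({
    "text": ["Ω
Ψ«Ψ§Ω Ψ£ΩΩ ΩΩΨͺΩΨΆΩΨ­", "Ω
Ψ«Ψ§Ω Ψ«Ψ§ΩΩ ΩΩΨͺΩΨΆΩΨ­"],
    "label": [2.0, 3.0],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = raw.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="arabert-task5-organization-fold0",  # placeholder name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)

trainer = Trainer(model=model, args=args, train_dataset=dataset, eval_dataset=dataset)
trainer.train()
```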
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0290 | 2 | 1.8024 | 0.1557 | 1.8024 | 1.3425 |
| No log | 0.0580 | 4 | 0.9368 | 0.2123 | 0.9368 | 0.9679 |
| No log | 0.0870 | 6 | 1.2599 | -0.0332 | 1.2599 | 1.1224 |
| No log | 0.1159 | 8 | 1.5849 | -0.0839 | 1.5849 | 1.2589 |
| No log | 0.1449 | 10 | 1.3336 | -0.0830 | 1.3336 | 1.1548 |
| No log | 0.1739 | 12 | 1.1867 | -0.0830 | 1.1867 | 1.0894 |
| No log | 0.2029 | 14 | 1.0137 | 0.2748 | 1.0137 | 1.0068 |
| No log | 0.2319 | 16 | 0.9296 | 0.2578 | 0.9296 | 0.9641 |
| No log | 0.2609 | 18 | 0.9274 | 0.2578 | 0.9274 | 0.9630 |
| No log | 0.2899 | 20 | 0.9673 | 0.2578 | 0.9673 | 0.9835 |
| No log | 0.3188 | 22 | 0.9856 | 0.2578 | 0.9856 | 0.9928 |
| No log | 0.3478 | 24 | 0.9837 | 0.2578 | 0.9837 | 0.9918 |
| No log | 0.3768 | 26 | 0.9796 | 0.3050 | 0.9796 | 0.9898 |
| No log | 0.4058 | 28 | 0.9681 | 0.3962 | 0.9681 | 0.9839 |
| No log | 0.4348 | 30 | 0.9262 | 0.2578 | 0.9262 | 0.9624 |
| No log | 0.4638 | 32 | 0.8998 | 0.2578 | 0.8998 | 0.9486 |
| No log | 0.4928 | 34 | 0.8821 | 0.2578 | 0.8821 | 0.9392 |
| No log | 0.5217 | 36 | 0.8242 | 0.3050 | 0.8242 | 0.9079 |
| No log | 0.5507 | 38 | 0.7302 | 0.3511 | 0.7302 | 0.8545 |
| No log | 0.5797 | 40 | 0.6948 | 0.4386 | 0.6948 | 0.8335 |
| No log | 0.6087 | 42 | 0.6765 | 0.4792 | 0.6765 | 0.8225 |
| No log | 0.6377 | 44 | 0.6974 | 0.5578 | 0.6974 | 0.8351 |
| No log | 0.6667 | 46 | 0.6709 | 0.5578 | 0.6709 | 0.8191 |
| No log | 0.6957 | 48 | 0.6486 | 0.5189 | 0.6486 | 0.8054 |
| No log | 0.7246 | 50 | 0.6392 | 0.4792 | 0.6392 | 0.7995 |
| No log | 0.7536 | 52 | 0.6937 | 0.4386 | 0.6937 | 0.8329 |
| No log | 0.7826 | 54 | 0.8085 | 0.4984 | 0.8085 | 0.8992 |
| No log | 0.8116 | 56 | 0.8995 | 0.4865 | 0.8995 | 0.9484 |
| No log | 0.8406 | 58 | 0.8962 | 0.5349 | 0.8962 | 0.9467 |
| No log | 0.8696 | 60 | 0.8745 | 0.4984 | 0.8745 | 0.9351 |
| No log | 0.8986 | 62 | 0.8373 | 0.5033 | 0.8373 | 0.9151 |
| No log | 0.9275 | 64 | 0.8048 | 0.4481 | 0.8048 | 0.8971 |
| No log | 0.9565 | 66 | 0.7857 | 0.4481 | 0.7857 | 0.8864 |
| No log | 0.9855 | 68 | 0.8050 | 0.4481 | 0.8050 | 0.8972 |
| No log | 1.0145 | 70 | 0.8626 | 0.3972 | 0.8626 | 0.9288 |
| No log | 1.0435 | 72 | 0.8698 | 0.3972 | 0.8698 | 0.9326 |
| No log | 1.0725 | 74 | 0.8404 | 0.3972 | 0.8404 | 0.9168 |
| No log | 1.1014 | 76 | 0.8118 | 0.4098 | 0.8118 | 0.9010 |
| No log | 1.1304 | 78 | 0.8122 | 0.4481 | 0.8122 | 0.9012 |
| No log | 1.1594 | 80 | 0.7622 | 0.4481 | 0.7622 | 0.8730 |
| No log | 1.1884 | 82 | 0.6923 | 0.4855 | 0.6923 | 0.8320 |
| No log | 1.2174 | 84 | 0.6520 | 0.4855 | 0.6520 | 0.8075 |
| No log | 1.2464 | 86 | 0.6218 | 0.4855 | 0.6218 | 0.7885 |
| No log | 1.2754 | 88 | 0.6067 | 0.4855 | 0.6067 | 0.7789 |
| No log | 1.3043 | 90 | 0.5981 | 0.4855 | 0.5981 | 0.7734 |
| No log | 1.3333 | 92 | 0.5835 | 0.4855 | 0.5835 | 0.7639 |
| No log | 1.3623 | 94 | 0.5872 | 0.5068 | 0.5872 | 0.7663 |
| No log | 1.3913 | 96 | 0.5976 | 0.5550 | 0.5976 | 0.7731 |
| No log | 1.4203 | 98 | 0.6285 | 0.5972 | 0.6285 | 0.7928 |
| No log | 1.4493 | 100 | 0.6557 | 0.5972 | 0.6557 | 0.8098 |
| No log | 1.4783 | 102 | 0.7140 | 0.5900 | 0.7140 | 0.8450 |
| No log | 1.5072 | 104 | 0.7587 | 0.6101 | 0.7587 | 0.8710 |
| No log | 1.5362 | 106 | 0.7755 | 0.7234 | 0.7755 | 0.8806 |
| No log | 1.5652 | 108 | 0.7651 | 0.5852 | 0.7651 | 0.8747 |
| No log | 1.5942 | 110 | 0.8988 | 0.5604 | 0.8988 | 0.9481 |
| No log | 1.6232 | 112 | 1.0143 | 0.5127 | 1.0143 | 1.0071 |
| No log | 1.6522 | 114 | 1.0909 | 0.4856 | 1.0909 | 1.0445 |
| No log | 1.6812 | 116 | 1.1933 | 0.4856 | 1.1933 | 1.0924 |
| No log | 1.7101 | 118 | 1.1622 | 0.4856 | 1.1622 | 1.0781 |
| No log | 1.7391 | 120 | 1.0052 | 0.5127 | 1.0052 | 1.0026 |
| No log | 1.7681 | 122 | 0.8246 | 0.5662 | 0.8246 | 0.9081 |
| No log | 1.7971 | 124 | 0.6453 | 0.5856 | 0.6453 | 0.8033 |
| No log | 1.8261 | 126 | 0.6422 | 0.5252 | 0.6422 | 0.8014 |
| No log | 1.8551 | 128 | 0.7198 | 0.6821 | 0.7198 | 0.8484 |
| No log | 1.8841 | 130 | 0.6836 | 0.5518 | 0.6836 | 0.8268 |
| No log | 1.9130 | 132 | 0.7778 | 0.5325 | 0.7778 | 0.8819 |
| No log | 1.9420 | 134 | 0.9025 | 0.6253 | 0.9025 | 0.9500 |
| No log | 1.9710 | 136 | 0.9417 | 0.5604 | 0.9417 | 0.9704 |
| No log | 2.0 | 138 | 0.8092 | 0.5415 | 0.8092 | 0.8996 |
| No log | 2.0290 | 140 | 0.7175 | 0.5662 | 0.7175 | 0.8471 |
| No log | 2.0580 | 142 | 0.7718 | 0.5662 | 0.7718 | 0.8785 |
| No log | 2.0870 | 144 | 0.8637 | 0.5842 | 0.8637 | 0.9294 |
| No log | 2.1159 | 146 | 0.9954 | 0.6025 | 0.9954 | 0.9977 |
| No log | 2.1449 | 148 | 0.9901 | 0.5842 | 0.9901 | 0.9950 |
| No log | 2.1739 | 150 | 0.9563 | 0.6275 | 0.9563 | 0.9779 |
| No log | 2.2029 | 152 | 0.8274 | 0.6275 | 0.8274 | 0.9096 |
| No log | 2.2319 | 154 | 0.6751 | 0.6753 | 0.6751 | 0.8217 |
| No log | 2.2609 | 156 | 0.6281 | 0.7545 | 0.6281 | 0.7925 |
| No log | 2.2899 | 158 | 0.6008 | 0.7465 | 0.6008 | 0.7751 |
| No log | 2.3188 | 160 | 0.6131 | 0.7303 | 0.6131 | 0.7830 |
| No log | 2.3478 | 162 | 0.7549 | 0.6755 | 0.7549 | 0.8689 |
| No log | 2.3768 | 164 | 0.9873 | 0.6336 | 0.9873 | 0.9937 |
| No log | 2.4058 | 166 | 1.0973 | 0.4892 | 1.0973 | 1.0475 |
| No log | 2.4348 | 168 | 1.0485 | 0.6288 | 1.0485 | 1.0240 |
| No log | 2.4638 | 170 | 0.9448 | 0.6421 | 0.9448 | 0.9720 |
| No log | 2.4928 | 172 | 0.8788 | 0.6421 | 0.8788 | 0.9374 |
| No log | 2.5217 | 174 | 0.9522 | 0.6421 | 0.9522 | 0.9758 |
| No log | 2.5507 | 176 | 1.0199 | 0.6421 | 1.0199 | 1.0099 |
| No log | 2.5797 | 178 | 1.1291 | 0.5607 | 1.1291 | 1.0626 |
| No log | 2.6087 | 180 | 1.0863 | 0.5607 | 1.0863 | 1.0423 |
| No log | 2.6377 | 182 | 0.8938 | 0.6421 | 0.8938 | 0.9454 |
| No log | 2.6667 | 184 | 0.7280 | 0.6512 | 0.7280 | 0.8532 |
| No log | 2.6957 | 186 | 0.6785 | 0.6126 | 0.6785 | 0.8237 |
| No log | 2.7246 | 188 | 0.7311 | 0.6421 | 0.7311 | 0.8551 |
| No log | 2.7536 | 190 | 0.8141 | 0.5661 | 0.8141 | 0.9023 |
| No log | 2.7826 | 192 | 0.8225 | 0.5661 | 0.8225 | 0.9069 |
| No log | 2.8116 | 194 | 0.9049 | 0.6421 | 0.9049 | 0.9513 |
| No log | 2.8406 | 196 | 1.0170 | 0.6258 | 1.0170 | 1.0085 |
| No log | 2.8696 | 198 | 1.1327 | 0.5960 | 1.1327 | 1.0643 |
| No log | 2.8986 | 200 | 1.2052 | 0.4919 | 1.2052 | 1.0978 |
| No log | 2.9275 | 202 | 1.0716 | 0.5368 | 1.0716 | 1.0352 |
| No log | 2.9565 | 204 | 0.8246 | 0.6421 | 0.8246 | 0.9081 |
| No log | 2.9855 | 206 | 0.7093 | 0.6421 | 0.7093 | 0.8422 |
| No log | 3.0145 | 208 | 0.6376 | 0.6809 | 0.6376 | 0.7985 |
| No log | 3.0435 | 210 | 0.6626 | 0.6809 | 0.6626 | 0.8140 |
| No log | 3.0725 | 212 | 0.7270 | 0.6809 | 0.7270 | 0.8526 |
| No log | 3.1014 | 214 | 0.7713 | 0.6809 | 0.7713 | 0.8783 |
| No log | 3.1304 | 216 | 0.6997 | 0.6468 | 0.6997 | 0.8365 |
| No log | 3.1594 | 218 | 0.6141 | 0.7393 | 0.6141 | 0.7837 |
| No log | 3.1884 | 220 | 0.5988 | 0.8054 | 0.5988 | 0.7738 |
| No log | 3.2174 | 222 | 0.6037 | 0.8054 | 0.6037 | 0.7770 |
| No log | 3.2464 | 224 | 0.6216 | 0.7794 | 0.6216 | 0.7884 |
| No log | 3.2754 | 226 | 0.6893 | 0.6038 | 0.6893 | 0.8302 |
| No log | 3.3043 | 228 | 0.8140 | 0.6140 | 0.8140 | 0.9022 |
| No log | 3.3333 | 230 | 0.8383 | 0.6140 | 0.8383 | 0.9156 |
| No log | 3.3623 | 232 | 0.8325 | 0.6140 | 0.8325 | 0.9124 |
| No log | 3.3913 | 234 | 0.7691 | 0.7059 | 0.7691 | 0.8770 |
| No log | 3.4203 | 236 | 0.7757 | 0.7059 | 0.7757 | 0.8807 |
| No log | 3.4493 | 238 | 0.8229 | 0.7059 | 0.8229 | 0.9071 |
| No log | 3.4783 | 240 | 0.8765 | 0.6844 | 0.8765 | 0.9362 |
| No log | 3.5072 | 242 | 0.9402 | 0.6809 | 0.9402 | 0.9696 |
| No log | 3.5362 | 244 | 0.9022 | 0.6429 | 0.9022 | 0.9499 |
| No log | 3.5652 | 246 | 0.8204 | 0.6660 | 0.8204 | 0.9058 |
| No log | 3.5942 | 248 | 0.7650 | 0.7113 | 0.7650 | 0.8747 |
| No log | 3.6232 | 250 | 0.7439 | 0.7113 | 0.7439 | 0.8625 |
| No log | 3.6522 | 252 | 0.7561 | 0.6711 | 0.7561 | 0.8695 |
| No log | 3.6812 | 254 | 0.7925 | 0.6468 | 0.7925 | 0.8902 |
| No log | 3.7101 | 256 | 0.8968 | 0.6809 | 0.8968 | 0.9470 |
| No log | 3.7391 | 258 | 0.9697 | 0.6008 | 0.9697 | 0.9847 |
| No log | 3.7681 | 260 | 0.9694 | 0.6008 | 0.9694 | 0.9846 |
| No log | 3.7971 | 262 | 0.8727 | 0.6574 | 0.8727 | 0.9342 |
| No log | 3.8261 | 264 | 0.7684 | 0.6809 | 0.7684 | 0.8766 |
| No log | 3.8551 | 266 | 0.7277 | 0.6468 | 0.7277 | 0.8530 |
| No log | 3.8841 | 268 | 0.7225 | 0.6468 | 0.7225 | 0.8500 |
| No log | 3.9130 | 270 | 0.7308 | 0.6468 | 0.7308 | 0.8549 |
| No log | 3.9420 | 272 | 0.7305 | 0.6468 | 0.7305 | 0.8547 |
| No log | 3.9710 | 274 | 0.7952 | 0.6531 | 0.7952 | 0.8918 |
| No log | 4.0 | 276 | 0.8573 | 0.6531 | 0.8573 | 0.9259 |
| No log | 4.0290 | 278 | 0.8052 | 0.6468 | 0.8052 | 0.8973 |
| No log | 4.0580 | 280 | 0.7424 | 0.6468 | 0.7424 | 0.8616 |
| No log | 4.0870 | 282 | 0.7500 | 0.6468 | 0.7500 | 0.8660 |
| No log | 4.1159 | 284 | 0.7640 | 0.6468 | 0.7640 | 0.8741 |
| No log | 4.1449 | 286 | 0.7450 | 0.6468 | 0.7450 | 0.8632 |
| No log | 4.1739 | 288 | 0.7643 | 0.6468 | 0.7643 | 0.8742 |
| No log | 4.2029 | 290 | 0.7368 | 0.6468 | 0.7368 | 0.8584 |
| No log | 4.2319 | 292 | 0.6785 | 0.6560 | 0.6785 | 0.8237 |
| No log | 4.2609 | 294 | 0.6539 | 0.7059 | 0.6539 | 0.8086 |
| No log | 4.2899 | 296 | 0.6681 | 0.7059 | 0.6681 | 0.8174 |
| No log | 4.3188 | 298 | 0.6875 | 0.6811 | 0.6875 | 0.8292 |
| No log | 4.3478 | 300 | 0.7607 | 0.6468 | 0.7607 | 0.8722 |
| No log | 4.3768 | 302 | 0.8315 | 0.6468 | 0.8315 | 0.9119 |
| No log | 4.4058 | 304 | 0.8376 | 0.6468 | 0.8376 | 0.9152 |
| No log | 4.4348 | 306 | 0.8015 | 0.6468 | 0.8015 | 0.8953 |
| No log | 4.4638 | 308 | 0.7706 | 0.6468 | 0.7706 | 0.8778 |
| No log | 4.4928 | 310 | 0.7329 | 0.6468 | 0.7329 | 0.8561 |
| No log | 4.5217 | 312 | 0.6953 | 0.6468 | 0.6953 | 0.8339 |
| No log | 4.5507 | 314 | 0.6587 | 0.7393 | 0.6587 | 0.8116 |
| No log | 4.5797 | 316 | 0.6542 | 0.7647 | 0.6542 | 0.8088 |
| No log | 4.6087 | 318 | 0.6397 | 0.7794 | 0.6397 | 0.7998 |
| No log | 4.6377 | 320 | 0.6456 | 0.7647 | 0.6456 | 0.8035 |
| No log | 4.6667 | 322 | 0.6578 | 0.7393 | 0.6578 | 0.8110 |
| No log | 4.6957 | 324 | 0.6591 | 0.7393 | 0.6591 | 0.8118 |
| No log | 4.7246 | 326 | 0.6746 | 0.7136 | 0.6746 | 0.8213 |
| No log | 4.7536 | 328 | 0.6488 | 0.7393 | 0.6488 | 0.8055 |
| No log | 4.7826 | 330 | 0.6226 | 0.7531 | 0.6226 | 0.7891 |
| No log | 4.8116 | 332 | 0.6281 | 0.7531 | 0.6281 | 0.7925 |
| No log | 4.8406 | 334 | 0.6657 | 0.7393 | 0.6657 | 0.8159 |
| No log | 4.8696 | 336 | 0.7196 | 0.6560 | 0.7196 | 0.8483 |
| No log | 4.8986 | 338 | 0.7465 | 0.6909 | 0.7465 | 0.8640 |
| No log | 4.9275 | 340 | 0.7714 | 0.6909 | 0.7714 | 0.8783 |
| No log | 4.9565 | 342 | 0.8487 | 0.6383 | 0.8487 | 0.9212 |
| No log | 4.9855 | 344 | 0.9536 | 0.6304 | 0.9536 | 0.9765 |
| No log | 5.0145 | 346 | 1.0217 | 0.5842 | 1.0217 | 1.0108 |
| No log | 5.0435 | 348 | 1.0407 | 0.5783 | 1.0407 | 1.0201 |
| No log | 5.0725 | 350 | 0.9806 | 0.5842 | 0.9806 | 0.9903 |
| No log | 5.1014 | 352 | 0.9200 | 0.6074 | 0.9200 | 0.9592 |
| No log | 5.1304 | 354 | 0.8996 | 0.6304 | 0.8996 | 0.9485 |
| No log | 5.1594 | 356 | 0.8744 | 0.6304 | 0.8744 | 0.9351 |
| No log | 5.1884 | 358 | 0.8370 | 0.6809 | 0.8370 | 0.9149 |
| No log | 5.2174 | 360 | 0.8647 | 0.6809 | 0.8647 | 0.9299 |
| No log | 5.2464 | 362 | 0.8676 | 0.6809 | 0.8676 | 0.9315 |
| No log | 5.2754 | 364 | 0.8185 | 0.6909 | 0.8185 | 0.9047 |
| No log | 5.3043 | 366 | 0.8312 | 0.6809 | 0.8312 | 0.9117 |
| No log | 5.3333 | 368 | 0.8260 | 0.6809 | 0.8260 | 0.9088 |
| No log | 5.3623 | 370 | 0.8350 | 0.6809 | 0.8350 | 0.9138 |
| No log | 5.3913 | 372 | 0.8550 | 0.6809 | 0.8550 | 0.9247 |
| No log | 5.4203 | 374 | 0.8042 | 0.6468 | 0.8042 | 0.8968 |
| No log | 5.4493 | 376 | 0.7855 | 0.6560 | 0.7855 | 0.8863 |
| No log | 5.4783 | 378 | 0.8032 | 0.6468 | 0.8032 | 0.8962 |
| No log | 5.5072 | 380 | 0.7981 | 0.6468 | 0.7981 | 0.8934 |
| No log | 5.5362 | 382 | 0.7652 | 0.6560 | 0.7652 | 0.8747 |
| No log | 5.5652 | 384 | 0.7828 | 0.6468 | 0.7828 | 0.8848 |
| No log | 5.5942 | 386 | 0.8279 | 0.6531 | 0.8279 | 0.9099 |
| No log | 5.6232 | 388 | 0.8908 | 0.6844 | 0.8908 | 0.9438 |
| No log | 5.6522 | 390 | 0.9109 | 0.7243 | 0.9109 | 0.9544 |
| No log | 5.6812 | 392 | 0.8770 | 0.6964 | 0.8770 | 0.9365 |
| No log | 5.7101 | 394 | 0.8829 | 0.6602 | 0.8829 | 0.9396 |
| No log | 5.7391 | 396 | 0.8993 | 0.6602 | 0.8993 | 0.9483 |
| No log | 5.7681 | 398 | 0.9828 | 0.6602 | 0.9828 | 0.9913 |
| No log | 5.7971 | 400 | 1.0805 | 0.6535 | 1.0805 | 1.0395 |
| No log | 5.8261 | 402 | 1.1116 | 0.6535 | 1.1116 | 1.0543 |
| No log | 5.8551 | 404 | 1.0604 | 0.6602 | 1.0604 | 1.0298 |
| No log | 5.8841 | 406 | 1.0080 | 0.6602 | 1.0080 | 1.0040 |
| No log | 5.9130 | 408 | 0.9739 | 0.6140 | 0.9739 | 0.9869 |
| No log | 5.9420 | 410 | 0.9053 | 0.6809 | 0.9053 | 0.9515 |
| No log | 5.9710 | 412 | 0.9057 | 0.6809 | 0.9057 | 0.9517 |
| No log | 6.0 | 414 | 0.9518 | 0.6809 | 0.9518 | 0.9756 |
| No log | 6.0290 | 416 | 0.9558 | 0.6809 | 0.9558 | 0.9777 |
| No log | 6.0580 | 418 | 0.9331 | 0.6809 | 0.9331 | 0.9660 |
| No log | 6.0870 | 420 | 0.8973 | 0.6809 | 0.8973 | 0.9473 |
| No log | 6.1159 | 422 | 0.8853 | 0.6809 | 0.8853 | 0.9409 |
| No log | 6.1449 | 424 | 0.9231 | 0.6809 | 0.9231 | 0.9608 |
| No log | 6.1739 | 426 | 0.9574 | 0.6809 | 0.9574 | 0.9785 |
| No log | 6.2029 | 428 | 0.9549 | 0.6809 | 0.9549 | 0.9772 |
| No log | 6.2319 | 430 | 0.9522 | 0.6809 | 0.9522 | 0.9758 |
| No log | 6.2609 | 432 | 0.8977 | 0.6809 | 0.8977 | 0.9475 |
| No log | 6.2899 | 434 | 0.8644 | 0.6809 | 0.8644 | 0.9298 |
| No log | 6.3188 | 436 | 0.8896 | 0.6809 | 0.8896 | 0.9432 |
| No log | 6.3478 | 438 | 0.9329 | 0.6055 | 0.9329 | 0.9659 |
| No log | 6.3768 | 440 | 0.9783 | 0.6055 | 0.9783 | 0.9891 |
| No log | 6.4058 | 442 | 0.9492 | 0.6055 | 0.9492 | 0.9743 |
| No log | 6.4348 | 444 | 0.9167 | 0.6055 | 0.9167 | 0.9574 |
| No log | 6.4638 | 446 | 0.8850 | 0.6055 | 0.8850 | 0.9408 |
| No log | 6.4928 | 448 | 0.8678 | 0.6055 | 0.8678 | 0.9315 |
| No log | 6.5217 | 450 | 0.8215 | 0.5714 | 0.8215 | 0.9064 |
| No log | 6.5507 | 452 | 0.7790 | 0.6468 | 0.7790 | 0.8826 |
| No log | 6.5797 | 454 | 0.7182 | 0.6560 | 0.7182 | 0.8475 |
| No log | 6.6087 | 456 | 0.6941 | 0.6560 | 0.6941 | 0.8331 |
| No log | 6.6377 | 458 | 0.7090 | 0.6560 | 0.7090 | 0.8420 |
| No log | 6.6667 | 460 | 0.7267 | 0.6560 | 0.7267 | 0.8525 |
| No log | 6.6957 | 462 | 0.7193 | 0.6560 | 0.7193 | 0.8481 |
| No log | 6.7246 | 464 | 0.7290 | 0.6560 | 0.7290 | 0.8538 |
| No log | 6.7536 | 466 | 0.7237 | 0.6560 | 0.7237 | 0.8507 |
| No log | 6.7826 | 468 | 0.7165 | 0.6617 | 0.7165 | 0.8465 |
| No log | 6.8116 | 470 | 0.7063 | 0.6617 | 0.7063 | 0.8404 |
| No log | 6.8406 | 472 | 0.7013 | 0.6617 | 0.7013 | 0.8375 |
| No log | 6.8696 | 474 | 0.6923 | 0.6617 | 0.6923 | 0.8321 |
| No log | 6.8986 | 476 | 0.7350 | 0.6617 | 0.7350 | 0.8573 |
| No log | 6.9275 | 478 | 0.7730 | 0.6617 | 0.7730 | 0.8792 |
| No log | 6.9565 | 480 | 0.7880 | 0.6617 | 0.7880 | 0.8877 |
| No log | 6.9855 | 482 | 0.8042 | 0.6617 | 0.8042 | 0.8968 |
| No log | 7.0145 | 484 | 0.8179 | 0.6617 | 0.8179 | 0.9044 |
| No log | 7.0435 | 486 | 0.8521 | 0.5825 | 0.8521 | 0.9231 |
| No log | 7.0725 | 488 | 0.8804 | 0.5825 | 0.8804 | 0.9383 |
| No log | 7.1014 | 490 | 0.8931 | 0.6140 | 0.8931 | 0.9451 |
| No log | 7.1304 | 492 | 0.8885 | 0.6140 | 0.8885 | 0.9426 |
| No log | 7.1594 | 494 | 0.8616 | 0.5882 | 0.8616 | 0.9282 |
| No log | 7.1884 | 496 | 0.8547 | 0.5882 | 0.8547 | 0.9245 |
| No log | 7.2174 | 498 | 0.8503 | 0.5882 | 0.8503 | 0.9221 |
| 0.3883 | 7.2464 | 500 | 0.8344 | 0.5882 | 0.8344 | 0.9135 |
| 0.3883 | 7.2754 | 502 | 0.8270 | 0.5882 | 0.8270 | 0.9094 |
| 0.3883 | 7.3043 | 504 | 0.7905 | 0.5882 | 0.7905 | 0.8891 |
| 0.3883 | 7.3333 | 506 | 0.7402 | 0.5882 | 0.7402 | 0.8603 |
| 0.3883 | 7.3623 | 508 | 0.6900 | 0.6617 | 0.6900 | 0.8307 |
| 0.3883 | 7.3913 | 510 | 0.6545 | 0.7136 | 0.6545 | 0.8090 |
| 0.3883 | 7.4203 | 512 | 0.6539 | 0.7136 | 0.6539 | 0.8086 |
| 0.3883 | 7.4493 | 514 | 0.6486 | 0.7136 | 0.6486 | 0.8054 |
| 0.3883 | 7.4783 | 516 | 0.6462 | 0.7136 | 0.6462 | 0.8039 |
| 0.3883 | 7.5072 | 518 | 0.6666 | 0.6560 | 0.6666 | 0.8164 |
| 0.3883 | 7.5362 | 520 | 0.6822 | 0.6560 | 0.6822 | 0.8260 |
| 0.3883 | 7.5652 | 522 | 0.7153 | 0.6560 | 0.7153 | 0.8458 |
| 0.3883 | 7.5942 | 524 | 0.7305 | 0.6560 | 0.7305 | 0.8547 |
| 0.3883 | 7.6232 | 526 | 0.7243 | 0.6560 | 0.7243 | 0.8510 |
| 0.3883 | 7.6522 | 528 | 0.7059 | 0.6560 | 0.7059 | 0.8401 |
| 0.3883 | 7.6812 | 530 | 0.7049 | 0.6560 | 0.7049 | 0.8396 |
| 0.3883 | 7.7101 | 532 | 0.7322 | 0.6560 | 0.7322 | 0.8557 |
| 0.3883 | 7.7391 | 534 | 0.7582 | 0.6560 | 0.7582 | 0.8707 |
| 0.3883 | 7.7681 | 536 | 0.7702 | 0.6909 | 0.7702 | 0.8776 |
| 0.3883 | 7.7971 | 538 | 0.7910 | 0.6909 | 0.7910 | 0.8894 |
| 0.3883 | 7.8261 | 540 | 0.8067 | 0.6909 | 0.8067 | 0.8982 |
| 0.3883 | 7.8551 | 542 | 0.8323 | 0.6909 | 0.8323 | 0.9123 |
| 0.3883 | 7.8841 | 544 | 0.8737 | 0.6809 | 0.8737 | 0.9347 |
| 0.3883 | 7.9130 | 546 | 0.9126 | 0.6055 | 0.9126 | 0.9553 |
| 0.3883 | 7.9420 | 548 | 0.9518 | 0.6055 | 0.9518 | 0.9756 |
| 0.3883 | 7.9710 | 550 | 0.9558 | 0.6055 | 0.9558 | 0.9777 |
| 0.3883 | 8.0 | 552 | 0.9409 | 0.6055 | 0.9409 | 0.9700 |
| 0.3883 | 8.0290 | 554 | 0.9199 | 0.6055 | 0.9199 | 0.9591 |
| 0.3883 | 8.0580 | 556 | 0.9071 | 0.6140 | 0.9071 | 0.9524 |
| 0.3883 | 8.0870 | 558 | 0.8813 | 0.6055 | 0.8813 | 0.9388 |
| 0.3883 | 8.1159 | 560 | 0.8421 | 0.6122 | 0.8421 | 0.9177 |
| 0.3883 | 8.1449 | 562 | 0.8001 | 0.6909 | 0.8001 | 0.8945 |
| 0.3883 | 8.1739 | 564 | 0.7835 | 0.6909 | 0.7835 | 0.8852 |
| 0.3883 | 8.2029 | 566 | 0.7754 | 0.6909 | 0.7754 | 0.8806 |
| 0.3883 | 8.2319 | 568 | 0.7653 | 0.6909 | 0.7653 | 0.8748 |
| 0.3883 | 8.2609 | 570 | 0.7539 | 0.6560 | 0.7539 | 0.8683 |
| 0.3883 | 8.2899 | 572 | 0.7428 | 0.6617 | 0.7428 | 0.8619 |
| 0.3883 | 8.3188 | 574 | 0.7533 | 0.6617 | 0.7533 | 0.8679 |
| 0.3883 | 8.3478 | 576 | 0.7583 | 0.6617 | 0.7583 | 0.8708 |
| 0.3883 | 8.3768 | 578 | 0.7633 | 0.6617 | 0.7633 | 0.8737 |
| 0.3883 | 8.4058 | 580 | 0.7538 | 0.6617 | 0.7538 | 0.8682 |
| 0.3883 | 8.4348 | 582 | 0.7486 | 0.6617 | 0.7486 | 0.8652 |
| 0.3883 | 8.4638 | 584 | 0.7381 | 0.6617 | 0.7381 | 0.8591 |
| 0.3883 | 8.4928 | 586 | 0.7415 | 0.6617 | 0.7415 | 0.8611 |
| 0.3883 | 8.5217 | 588 | 0.7382 | 0.6617 | 0.7382 | 0.8592 |
| 0.3883 | 8.5507 | 590 | 0.7291 | 0.6617 | 0.7291 | 0.8539 |
| 0.3883 | 8.5797 | 592 | 0.7379 | 0.6617 | 0.7379 | 0.8590 |
| 0.3883 | 8.6087 | 594 | 0.7490 | 0.6617 | 0.7490 | 0.8654 |
| 0.3883 | 8.6377 | 596 | 0.7677 | 0.6617 | 0.7677 | 0.8762 |
| 0.3883 | 8.6667 | 598 | 0.7887 | 0.6617 | 0.7887 | 0.8881 |
| 0.3883 | 8.6957 | 600 | 0.7905 | 0.6617 | 0.7905 | 0.8891 |
| 0.3883 | 8.7246 | 602 | 0.7871 | 0.6617 | 0.7871 | 0.8872 |
| 0.3883 | 8.7536 | 604 | 0.7876 | 0.6617 | 0.7876 | 0.8875 |
| 0.3883 | 8.7826 | 606 | 0.7754 | 0.6617 | 0.7754 | 0.8806 |
| 0.3883 | 8.8116 | 608 | 0.7721 | 0.6617 | 0.7721 | 0.8787 |
| 0.3883 | 8.8406 | 610 | 0.7695 | 0.6617 | 0.7695 | 0.8772 |
| 0.3883 | 8.8696 | 612 | 0.7764 | 0.6617 | 0.7764 | 0.8811 |
| 0.3883 | 8.8986 | 614 | 0.7737 | 0.6617 | 0.7737 | 0.8796 |
| 0.3883 | 8.9275 | 616 | 0.7751 | 0.6617 | 0.7751 | 0.8804 |
| 0.3883 | 8.9565 | 618 | 0.7758 | 0.6617 | 0.7758 | 0.8808 |
| 0.3883 | 8.9855 | 620 | 0.7834 | 0.6617 | 0.7834 | 0.8851 |
| 0.3883 | 9.0145 | 622 | 0.7955 | 0.6617 | 0.7955 | 0.8919 |
| 0.3883 | 9.0435 | 624 | 0.8031 | 0.6617 | 0.8031 | 0.8961 |
| 0.3883 | 9.0725 | 626 | 0.8099 | 0.6531 | 0.8099 | 0.8999 |
| 0.3883 | 9.1014 | 628 | 0.8143 | 0.6531 | 0.8143 | 0.9024 |
| 0.3883 | 9.1304 | 630 | 0.8123 | 0.6531 | 0.8123 | 0.9013 |
| 0.3883 | 9.1594 | 632 | 0.8131 | 0.6531 | 0.8131 | 0.9017 |
| 0.3883 | 9.1884 | 634 | 0.8223 | 0.6531 | 0.8223 | 0.9068 |
| 0.3883 | 9.2174 | 636 | 0.8245 | 0.6531 | 0.8245 | 0.9080 |
| 0.3883 | 9.2464 | 638 | 0.8172 | 0.6531 | 0.8172 | 0.9040 |
| 0.3883 | 9.2754 | 640 | 0.8115 | 0.6617 | 0.8115 | 0.9009 |
| 0.3883 | 9.3043 | 642 | 0.8017 | 0.6617 | 0.8017 | 0.8954 |
| 0.3883 | 9.3333 | 644 | 0.7910 | 0.6617 | 0.7910 | 0.8894 |
| 0.3883 | 9.3623 | 646 | 0.7920 | 0.6617 | 0.7920 | 0.8899 |
| 0.3883 | 9.3913 | 648 | 0.7997 | 0.6617 | 0.7997 | 0.8942 |
| 0.3883 | 9.4203 | 650 | 0.8152 | 0.6531 | 0.8152 | 0.9029 |
| 0.3883 | 9.4493 | 652 | 0.8333 | 0.6531 | 0.8333 | 0.9129 |
| 0.3883 | 9.4783 | 654 | 0.8426 | 0.6531 | 0.8426 | 0.9180 |
| 0.3883 | 9.5072 | 656 | 0.8516 | 0.6531 | 0.8516 | 0.9228 |
| 0.3883 | 9.5362 | 658 | 0.8557 | 0.6531 | 0.8557 | 0.9250 |
| 0.3883 | 9.5652 | 660 | 0.8584 | 0.6531 | 0.8584 | 0.9265 |
| 0.3883 | 9.5942 | 662 | 0.8606 | 0.6531 | 0.8606 | 0.9277 |
| 0.3883 | 9.6232 | 664 | 0.8583 | 0.6531 | 0.8583 | 0.9264 |
| 0.3883 | 9.6522 | 666 | 0.8581 | 0.6531 | 0.8581 | 0.9263 |
| 0.3883 | 9.6812 | 668 | 0.8557 | 0.6531 | 0.8557 | 0.9250 |
| 0.3883 | 9.7101 | 670 | 0.8509 | 0.6531 | 0.8509 | 0.9225 |
| 0.3883 | 9.7391 | 672 | 0.8452 | 0.6531 | 0.8452 | 0.9193 |
| 0.3883 | 9.7681 | 674 | 0.8379 | 0.6531 | 0.8379 | 0.9154 |
| 0.3883 | 9.7971 | 676 | 0.8328 | 0.6531 | 0.8328 | 0.9126 |
| 0.3883 | 9.8261 | 678 | 0.8311 | 0.6531 | 0.8311 | 0.9116 |
| 0.3883 | 9.8551 | 680 | 0.8284 | 0.6531 | 0.8284 | 0.9101 |
| 0.3883 | 9.8841 | 682 | 0.8274 | 0.6531 | 0.8274 | 0.9096 |
| 0.3883 | 9.9130 | 684 | 0.8262 | 0.6531 | 0.8262 | 0.9089 |
| 0.3883 | 9.9420 | 686 | 0.8260 | 0.6531 | 0.8260 | 0.9089 |
| 0.3883 | 9.9710 | 688 | 0.8262 | 0.6531 | 0.8262 | 0.9090 |
| 0.3883 | 10.0 | 690 | 0.8264 | 0.6531 | 0.8264 | 0.9091 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
sjleee/model | sjleee | 2024-11-25T15:41:39Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-25T15:39:25Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sjleee
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
swsqy/hongkongcomic | swsqy | 2024-11-25T15:35:32Z | 10 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2024-11-25T15:01:54Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: efef
output:
url: images/generated-image (1).png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: vev
---
# asvew
<Gallery />
## Trigger words
You should use `vev` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/swsqy/hongkongcomic/tree/main) them in the Files & versions tab.
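The LoRA can also be loaded with the diffusers library. The snippet below is a minimal sketch assuming the base model declared above (black-forest-labs/FLUX.1-dev) and a single LoRA safetensors file in this repository; pass `weight_name` explicitly if there are several files, and the prompt is only illustrative.

```python
# Minimal sketch: load the FLUX.1-dev base pipeline and attach this LoRA (assumptions noted above).
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("swsqy/hongkongcomic")  # add weight_name="..." if needed

# `vev` is the trigger word declared above; the rest of the prompt is illustrative.
image = pipeline("vev, a Hong Kong comic style street scene").images[0]
image.save("hongkongcomic_sample.png")
```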
|
jadechoghari/mar | jadechoghari | 2024-11-25T15:32:38Z | 547 | 11 | diffusers | [
"diffusers",
"safetensors",
"unconditional-image-generation",
"arxiv:2406.11838",
"license:mit",
"diffusers:MARModel",
"region:us"
] | unconditional-image-generation | 2024-09-07T22:13:50Z | ---
library_name: diffusers
license: mit
pipeline_tag: unconditional-image-generation
---
# Autoregressive Image Generation without Vector Quantization
## About
This model (MAR) introduces a novel approach to autoregressive image generation by eliminating the need for vector quantization.
Instead of relying on discrete tokens, the model operates in a continuous-valued space using a diffusion process to model the per-token probability distribution.
By employing a Diffusion Loss function, the model achieves efficient and high-quality image generation while benefiting from the speed advantages of autoregressive sequence modeling.
This approach simplifies the generation process, making it applicable to broader continuous-valued domains beyond just image synthesis.
It is based on [this paper](https://arxiv.org/abs/2406.11838).
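As a rough illustration of the idea (not the repository's actual implementation), the Diffusion Loss conditions a small denoising network on the autoregressive model's per-token output `z` and trains it with the standard noise-prediction objective. The shapes and the tiny MLP below are simplified assumptions; the paper uses a small MLP with more elaborate conditioning.

```python
# Illustrative sketch of the per-token Diffusion Loss (simplified; not the repo's code).
# `z` is the condition from the autoregressive model for one token, `x` the continuous token.
import torch
import torch.nn as nn

class DenoisingMLP(nn.Module):
    def __init__(self, token_dim=16, cond_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(token_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, token_dim),
        )

    def forward(self, x_t, t, z):
        return self.net(torch.cat([x_t, t, z], dim=-1))  # predict the added noise

def diffusion_loss(model, x, z, alphas_cumprod):
    t = torch.randint(0, len(alphas_cumprod), (x.shape[0],))
    a_bar = alphas_cumprod[t].unsqueeze(-1)
    eps = torch.randn_like(x)
    x_t = a_bar.sqrt() * x + (1 - a_bar).sqrt() * eps       # forward diffusion step
    eps_pred = model(x_t, t.float().unsqueeze(-1) / len(alphas_cumprod), z)
    return ((eps - eps_pred) ** 2).mean()                    # noise-prediction MSE
```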
## Usage:
You can easily load it through the Hugging Face `DiffusionPipeline` and optionally customize various parameters such as the model type, number of steps, and class labels.
```python
from diffusers import DiffusionPipeline
# load the pretrained model
pipeline = DiffusionPipeline.from_pretrained("jadechoghari/mar", trust_remote_code=True, custom_pipeline="jadechoghari/mar")
# generate an image with the model
generated_image = pipeline(
model_type="mar_huge", # choose from 'mar_base', 'mar_large', or 'mar_huge'
seed=42, # set a seed for reproducibility
num_ar_steps=64, # number of autoregressive steps
class_labels=[207, 360, 388], # provide valid ImageNet class labels
cfg_scale=4, # classifier-free guidance scale
output_dir="./images", # directory to save generated images
cfg_schedule = "constant", # choose between 'constant' (suggested) and 'linear'
)
# display the generated image
generated_image.show()
```
<p align="center">
<img src="https://github.com/LTH14/mar/raw/main/demo/visual.png" width="500">
</p>
This code loads the model, configures it for image generation, and saves the output to a specified directory.
We offer three pre-trained MAR models in `safetensors` format:
- `mar-base.safetensors`
- `mar-large.safetensors`
- `mar-huge.safetensors`
This is a Hugging Face Diffusers/GPU implementation of the paper [Autoregressive Image Generation without Vector Quantization](https://arxiv.org/abs/2406.11838)
The Official PyTorch Implementation is released in [this repository](https://github.com/LTH14/mar)
```
@article{li2024autoregressive,
title={Autoregressive Image Generation without Vector Quantization},
author={Li, Tianhong and Tian, Yonglong and Li, He and Deng, Mingyang and He, Kaiming},
journal={arXiv preprint arXiv:2406.11838},
year={2024}
}
```
## Acknowledgements
We thank Congyue Deng and Xinlei Chen for helpful discussion. We thank
Google TPU Research Cloud (TRC) for granting us access to TPUs, and Google Cloud Platform for
supporting GPU resources.
A large portion of the code in this repo is based on [MAE](https://github.com/facebookresearch/mae), [MAGE](https://github.com/LTH14/mage) and [DiT](https://github.com/facebookresearch/DiT).
## Contact
If you have any questions, feel free to contact me through email ([email protected]). Enjoy! |
maxg73872/biobert-v1.1-finetuned-medmcqa-2024-11-25-T15-27-54 | maxg73872 | 2024-11-25T15:31:35Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:dmis-lab/biobert-v1.1",
"base_model:finetune:dmis-lab/biobert-v1.1",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-11-25T15:31:13Z | ---
library_name: transformers
base_model: dmis-lab/biobert-v1.1
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: biobert-v1.1-finetuned-medmcqa-2024-11-25-T15-27-54
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-v1.1-finetuned-medmcqa-2024-11-25-T15-27-54
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3558
- Accuracy: 0.6667
- F1: 0.6494
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 1.3754 | 0.9978 | 57 | 1.3558 | 0.6667 | 0.6494 |
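Since the underlying task (MedMCQA) is four-way multiple choice, the fine-tuned checkpoint can be queried with `AutoModelForMultipleChoice`. The snippet below is a minimal sketch; the question and options are made up for illustration.

```python
# Minimal inference sketch for a multiple-choice checkpoint (question/options are hypothetical).
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

repo = "maxg73872/biobert-v1.1-finetuned-medmcqa-2024-11-25-T15-27-54"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

question = "Deficiency of which vitamin causes scurvy?"
options = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"]

enc = tokenizer([question] * len(options), options, return_tensors="pt", padding=True)
batch = {k: v.unsqueeze(0) for k, v in enc.items()}  # shape: (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**batch).logits
print(options[logits.argmax(dim=-1).item()])
```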
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k10_task5_organization_fold0 | MayBashendy | 2024-11-25T15:28:15Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T13:59:23Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k10_task5_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k10_task5_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7596
- Qwk: 0.8124
- Mse: 0.7596
- Rmse: 0.8715
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0408 | 2 | 1.5967 | 0.0796 | 1.5967 | 1.2636 |
| No log | 0.0816 | 4 | 1.0881 | 0.1519 | 1.0881 | 1.0431 |
| No log | 0.1224 | 6 | 1.1846 | 0.0942 | 1.1846 | 1.0884 |
| No log | 0.1633 | 8 | 1.1257 | 0.3195 | 1.1257 | 1.0610 |
| No log | 0.2041 | 10 | 0.9881 | 0.3915 | 0.9881 | 0.9940 |
| No log | 0.2449 | 12 | 0.9075 | 0.3906 | 0.9075 | 0.9526 |
| No log | 0.2857 | 14 | 0.8722 | 0.3906 | 0.8722 | 0.9339 |
| No log | 0.3265 | 16 | 0.7573 | 0.3103 | 0.7573 | 0.8702 |
| No log | 0.3673 | 18 | 0.7269 | 0.2910 | 0.7269 | 0.8526 |
| No log | 0.4082 | 20 | 0.7590 | 0.3358 | 0.7590 | 0.8712 |
| No log | 0.4490 | 22 | 0.7332 | 0.2910 | 0.7332 | 0.8563 |
| No log | 0.4898 | 24 | 0.7014 | 0.2910 | 0.7014 | 0.8375 |
| No log | 0.5306 | 26 | 0.7144 | 0.2453 | 0.7144 | 0.8452 |
| No log | 0.5714 | 28 | 0.7190 | 0.4620 | 0.7190 | 0.8479 |
| No log | 0.6122 | 30 | 0.7341 | 0.4780 | 0.7341 | 0.8568 |
| No log | 0.6531 | 32 | 0.7221 | 0.5349 | 0.7221 | 0.8498 |
| No log | 0.6939 | 34 | 0.6550 | 0.5405 | 0.6550 | 0.8093 |
| No log | 0.7347 | 36 | 0.6098 | 0.4754 | 0.6098 | 0.7809 |
| No log | 0.7755 | 38 | 0.6895 | 0.5357 | 0.6895 | 0.8304 |
| No log | 0.8163 | 40 | 0.8857 | 0.3493 | 0.8857 | 0.9411 |
| No log | 0.8571 | 42 | 0.9389 | 0.3220 | 0.9389 | 0.9689 |
| No log | 0.8980 | 44 | 0.8184 | 0.5455 | 0.8184 | 0.9047 |
| No log | 0.9388 | 46 | 0.6389 | 0.5357 | 0.6389 | 0.7993 |
| No log | 0.9796 | 48 | 0.5778 | 0.4946 | 0.5778 | 0.7601 |
| No log | 1.0204 | 50 | 0.5964 | 0.6100 | 0.5964 | 0.7723 |
| No log | 1.0612 | 52 | 0.7225 | 0.5573 | 0.7225 | 0.8500 |
| No log | 1.1020 | 54 | 0.7785 | 0.5511 | 0.7785 | 0.8823 |
| No log | 1.1429 | 56 | 0.7712 | 0.4460 | 0.7712 | 0.8782 |
| No log | 1.1837 | 58 | 0.7464 | 0.4539 | 0.7464 | 0.8640 |
| No log | 1.2245 | 60 | 0.7121 | 0.4495 | 0.7121 | 0.8439 |
| No log | 1.2653 | 62 | 0.6853 | 0.6094 | 0.6853 | 0.8278 |
| No log | 1.3061 | 64 | 0.6382 | 0.6094 | 0.6382 | 0.7989 |
| No log | 1.3469 | 66 | 0.5605 | 0.6011 | 0.5605 | 0.7487 |
| No log | 1.3878 | 68 | 0.5251 | 0.6215 | 0.5251 | 0.7246 |
| No log | 1.4286 | 70 | 0.5318 | 0.6501 | 0.5318 | 0.7293 |
| No log | 1.4694 | 72 | 0.5331 | 0.6995 | 0.5331 | 0.7302 |
| No log | 1.5102 | 74 | 0.5073 | 0.7500 | 0.5073 | 0.7123 |
| No log | 1.5510 | 76 | 0.5052 | 0.6765 | 0.5052 | 0.7108 |
| No log | 1.5918 | 78 | 0.5809 | 0.6394 | 0.5809 | 0.7622 |
| No log | 1.6327 | 80 | 0.6185 | 0.6222 | 0.6185 | 0.7865 |
| No log | 1.6735 | 82 | 0.6036 | 0.7136 | 0.6036 | 0.7769 |
| No log | 1.7143 | 84 | 0.5466 | 0.7037 | 0.5466 | 0.7393 |
| No log | 1.7551 | 86 | 0.5108 | 0.7305 | 0.5108 | 0.7147 |
| No log | 1.7959 | 88 | 0.4948 | 0.7458 | 0.4948 | 0.7034 |
| No log | 1.8367 | 90 | 0.4822 | 0.6765 | 0.4822 | 0.6944 |
| No log | 1.8776 | 92 | 0.5204 | 0.6875 | 0.5204 | 0.7214 |
| No log | 1.9184 | 94 | 0.5654 | 0.6154 | 0.5654 | 0.7520 |
| No log | 1.9592 | 96 | 0.6387 | 0.6287 | 0.6387 | 0.7992 |
| No log | 2.0 | 98 | 0.7510 | 0.6509 | 0.7510 | 0.8666 |
| No log | 2.0408 | 100 | 0.9415 | 0.5501 | 0.9415 | 0.9703 |
| No log | 2.0816 | 102 | 0.9839 | 0.5059 | 0.9839 | 0.9919 |
| No log | 2.1224 | 104 | 0.8308 | 0.5789 | 0.8308 | 0.9115 |
| No log | 2.1633 | 106 | 0.6502 | 0.6560 | 0.6502 | 0.8064 |
| No log | 2.2041 | 108 | 0.5964 | 0.6560 | 0.5964 | 0.7723 |
| No log | 2.2449 | 110 | 0.5545 | 0.6438 | 0.5545 | 0.7447 |
| No log | 2.2857 | 112 | 0.5525 | 0.6829 | 0.5525 | 0.7433 |
| No log | 2.3265 | 114 | 0.5276 | 0.6011 | 0.5276 | 0.7263 |
| No log | 2.3673 | 116 | 0.5133 | 0.6491 | 0.5133 | 0.7165 |
| No log | 2.4082 | 118 | 0.5038 | 0.6766 | 0.5038 | 0.7098 |
| No log | 2.4490 | 120 | 0.4895 | 0.6766 | 0.4895 | 0.6996 |
| No log | 2.4898 | 122 | 0.4866 | 0.6766 | 0.4866 | 0.6975 |
| No log | 2.5306 | 124 | 0.4879 | 0.7176 | 0.4879 | 0.6985 |
| No log | 2.5714 | 126 | 0.4988 | 0.7176 | 0.4988 | 0.7063 |
| No log | 2.6122 | 128 | 0.5131 | 0.7176 | 0.5131 | 0.7163 |
| No log | 2.6531 | 130 | 0.5364 | 0.7511 | 0.5364 | 0.7324 |
| No log | 2.6939 | 132 | 0.5744 | 0.7511 | 0.5744 | 0.7579 |
| No log | 2.7347 | 134 | 0.6561 | 0.7018 | 0.6561 | 0.8100 |
| No log | 2.7755 | 136 | 0.7019 | 0.7018 | 0.7019 | 0.8378 |
| No log | 2.8163 | 138 | 0.6957 | 0.7018 | 0.6957 | 0.8341 |
| No log | 2.8571 | 140 | 0.6523 | 0.7018 | 0.6523 | 0.8076 |
| No log | 2.8980 | 142 | 0.6622 | 0.7186 | 0.6622 | 0.8138 |
| No log | 2.9388 | 144 | 0.6787 | 0.7419 | 0.6787 | 0.8238 |
| No log | 2.9796 | 146 | 0.7177 | 0.7688 | 0.7177 | 0.8472 |
| No log | 3.0204 | 148 | 0.8504 | 0.7481 | 0.8504 | 0.9222 |
| No log | 3.0612 | 150 | 0.9751 | 0.6545 | 0.9751 | 0.9875 |
| No log | 3.1020 | 152 | 1.0566 | 0.5674 | 1.0566 | 1.0279 |
| No log | 3.1429 | 154 | 0.9847 | 0.6316 | 0.9847 | 0.9923 |
| No log | 3.1837 | 156 | 0.8195 | 0.7481 | 0.8195 | 0.9053 |
| No log | 3.2245 | 158 | 0.6955 | 0.7718 | 0.6955 | 0.8339 |
| No log | 3.2653 | 160 | 0.6636 | 0.7419 | 0.6636 | 0.8146 |
| No log | 3.3061 | 162 | 0.6710 | 0.7410 | 0.6710 | 0.8191 |
| No log | 3.3469 | 164 | 0.7158 | 0.7495 | 0.7158 | 0.8460 |
| No log | 3.3878 | 166 | 0.7904 | 0.7463 | 0.7904 | 0.8891 |
| No log | 3.4286 | 168 | 0.9328 | 0.6140 | 0.9328 | 0.9658 |
| No log | 3.4694 | 170 | 1.0793 | 0.5819 | 1.0793 | 1.0389 |
| No log | 3.5102 | 172 | 1.0576 | 0.6220 | 1.0576 | 1.0284 |
| No log | 3.5510 | 174 | 0.9793 | 0.6488 | 0.9793 | 0.9896 |
| No log | 3.5918 | 176 | 0.8872 | 0.7148 | 0.8872 | 0.9419 |
| No log | 3.6327 | 178 | 0.8512 | 0.7148 | 0.8512 | 0.9226 |
| No log | 3.6735 | 180 | 0.8218 | 0.7547 | 0.8218 | 0.9065 |
| No log | 3.7143 | 182 | 0.7739 | 0.7373 | 0.7739 | 0.8797 |
| No log | 3.7551 | 184 | 0.6642 | 0.7495 | 0.6642 | 0.8150 |
| No log | 3.7959 | 186 | 0.6044 | 0.7465 | 0.6044 | 0.7774 |
| No log | 3.8367 | 188 | 0.6097 | 0.6526 | 0.6097 | 0.7808 |
| No log | 3.8776 | 190 | 0.5932 | 0.6526 | 0.5932 | 0.7702 |
| No log | 3.9184 | 192 | 0.5571 | 0.7481 | 0.5571 | 0.7464 |
| No log | 3.9592 | 194 | 0.5813 | 0.7511 | 0.5813 | 0.7624 |
| No log | 4.0 | 196 | 0.7261 | 0.7093 | 0.7261 | 0.8521 |
| No log | 4.0408 | 198 | 0.8491 | 0.6667 | 0.8491 | 0.9215 |
| No log | 4.0816 | 200 | 0.8792 | 0.6854 | 0.8792 | 0.9377 |
| No log | 4.1224 | 202 | 0.8034 | 0.7178 | 0.8034 | 0.8963 |
| No log | 4.1633 | 204 | 0.7524 | 0.7566 | 0.7524 | 0.8674 |
| No log | 4.2041 | 206 | 0.7773 | 0.6792 | 0.7773 | 0.8817 |
| No log | 4.2449 | 208 | 0.8202 | 0.6473 | 0.8202 | 0.9056 |
| No log | 4.2857 | 210 | 0.8196 | 0.6660 | 0.8196 | 0.9053 |
| No log | 4.3265 | 212 | 0.7618 | 0.7419 | 0.7618 | 0.8728 |
| No log | 4.3673 | 214 | 0.6819 | 0.7098 | 0.6819 | 0.8258 |
| No log | 4.4082 | 216 | 0.6287 | 0.6752 | 0.6287 | 0.7929 |
| No log | 4.4490 | 218 | 0.5989 | 0.6752 | 0.5989 | 0.7739 |
| No log | 4.4898 | 220 | 0.5911 | 0.6814 | 0.5911 | 0.7688 |
| No log | 4.5306 | 222 | 0.6155 | 0.6919 | 0.6155 | 0.7845 |
| No log | 4.5714 | 224 | 0.6582 | 0.7588 | 0.6582 | 0.8113 |
| No log | 4.6122 | 226 | 0.7041 | 0.7588 | 0.7041 | 0.8391 |
| No log | 4.6531 | 228 | 0.7423 | 0.7588 | 0.7423 | 0.8616 |
| No log | 4.6939 | 230 | 0.8242 | 0.7125 | 0.8242 | 0.9079 |
| No log | 4.7347 | 232 | 0.8456 | 0.6374 | 0.8456 | 0.9196 |
| No log | 4.7755 | 234 | 0.8012 | 0.6374 | 0.8012 | 0.8951 |
| No log | 4.8163 | 236 | 0.7554 | 0.7566 | 0.7554 | 0.8692 |
| No log | 4.8571 | 238 | 0.7002 | 0.7269 | 0.7002 | 0.8368 |
| No log | 4.8980 | 240 | 0.6807 | 0.7267 | 0.6807 | 0.8251 |
| No log | 4.9388 | 242 | 0.6844 | 0.7267 | 0.6844 | 0.8273 |
| No log | 4.9796 | 244 | 0.6791 | 0.7419 | 0.6791 | 0.8241 |
| No log | 5.0204 | 246 | 0.6492 | 0.7419 | 0.6492 | 0.8058 |
| No log | 5.0612 | 248 | 0.6348 | 0.7419 | 0.6348 | 0.7968 |
| No log | 5.1020 | 250 | 0.6483 | 0.7511 | 0.6483 | 0.8052 |
| No log | 5.1429 | 252 | 0.6399 | 0.7511 | 0.6399 | 0.7999 |
| No log | 5.1837 | 254 | 0.6970 | 0.7566 | 0.6970 | 0.8349 |
| No log | 5.2245 | 256 | 0.8220 | 0.6584 | 0.8220 | 0.9066 |
| No log | 5.2653 | 258 | 0.9199 | 0.6367 | 0.9199 | 0.9591 |
| No log | 5.3061 | 260 | 0.9199 | 0.6367 | 0.9199 | 0.9591 |
| No log | 5.3469 | 262 | 0.9472 | 0.6098 | 0.9472 | 0.9732 |
| No log | 5.3878 | 264 | 0.9206 | 0.6367 | 0.9206 | 0.9595 |
| No log | 5.4286 | 266 | 0.8617 | 0.7043 | 0.8617 | 0.9283 |
| No log | 5.4694 | 268 | 0.7500 | 0.7347 | 0.7500 | 0.8660 |
| No log | 5.5102 | 270 | 0.6626 | 0.7588 | 0.6626 | 0.8140 |
| No log | 5.5510 | 272 | 0.6321 | 0.7267 | 0.6321 | 0.7950 |
| No log | 5.5918 | 274 | 0.6507 | 0.7588 | 0.6507 | 0.8067 |
| No log | 5.6327 | 276 | 0.7268 | 0.7351 | 0.7268 | 0.8525 |
| No log | 5.6735 | 278 | 0.7981 | 0.7125 | 0.7981 | 0.8934 |
| No log | 5.7143 | 280 | 0.8578 | 0.6791 | 0.8578 | 0.9262 |
| No log | 5.7551 | 282 | 0.8679 | 0.6791 | 0.8679 | 0.9316 |
| No log | 5.7959 | 284 | 0.8645 | 0.6374 | 0.8645 | 0.9298 |
| No log | 5.8367 | 286 | 0.8766 | 0.6374 | 0.8766 | 0.9363 |
| No log | 5.8776 | 288 | 0.9069 | 0.6374 | 0.9069 | 0.9523 |
| No log | 5.9184 | 290 | 0.9018 | 0.6791 | 0.9018 | 0.9496 |
| No log | 5.9592 | 292 | 0.8338 | 0.6792 | 0.8338 | 0.9131 |
| No log | 6.0 | 294 | 0.7810 | 0.6491 | 0.7810 | 0.8838 |
| No log | 6.0408 | 296 | 0.7502 | 0.7410 | 0.7502 | 0.8662 |
| No log | 6.0816 | 298 | 0.7310 | 0.7410 | 0.7310 | 0.8550 |
| No log | 6.1224 | 300 | 0.7104 | 0.7410 | 0.7104 | 0.8429 |
| No log | 6.1633 | 302 | 0.6709 | 0.7113 | 0.6709 | 0.8191 |
| No log | 6.2041 | 304 | 0.6581 | 0.7186 | 0.6581 | 0.8112 |
| No log | 6.2449 | 306 | 0.6595 | 0.7495 | 0.6595 | 0.8121 |
| No log | 6.2857 | 308 | 0.6746 | 0.7495 | 0.6746 | 0.8213 |
| No log | 6.3265 | 310 | 0.6760 | 0.7495 | 0.6760 | 0.8222 |
| No log | 6.3673 | 312 | 0.6620 | 0.7495 | 0.6620 | 0.8136 |
| No log | 6.4082 | 314 | 0.6325 | 0.7495 | 0.6325 | 0.7953 |
| No log | 6.4490 | 316 | 0.6217 | 0.7511 | 0.6217 | 0.7885 |
| No log | 6.4898 | 318 | 0.6453 | 0.7495 | 0.6453 | 0.8033 |
| No log | 6.5306 | 320 | 0.6853 | 0.7782 | 0.6853 | 0.8278 |
| No log | 6.5714 | 322 | 0.6900 | 0.7782 | 0.6900 | 0.8307 |
| No log | 6.6122 | 324 | 0.6648 | 0.7782 | 0.6648 | 0.8154 |
| No log | 6.6531 | 326 | 0.6314 | 0.7782 | 0.6314 | 0.7946 |
| No log | 6.6939 | 328 | 0.6474 | 0.7566 | 0.6474 | 0.8046 |
| No log | 6.7347 | 330 | 0.6863 | 0.7566 | 0.6863 | 0.8284 |
| No log | 6.7755 | 332 | 0.7375 | 0.7925 | 0.7375 | 0.8588 |
| No log | 6.8163 | 334 | 0.7687 | 0.7723 | 0.7687 | 0.8768 |
| No log | 6.8571 | 336 | 0.7429 | 0.7925 | 0.7429 | 0.8619 |
| No log | 6.8980 | 338 | 0.6947 | 0.7925 | 0.6947 | 0.8335 |
| No log | 6.9388 | 340 | 0.6627 | 0.7782 | 0.6627 | 0.8141 |
| No log | 6.9796 | 342 | 0.6583 | 0.7495 | 0.6583 | 0.8113 |
| No log | 7.0204 | 344 | 0.6726 | 0.7495 | 0.6726 | 0.8201 |
| No log | 7.0612 | 346 | 0.6993 | 0.7868 | 0.6993 | 0.8362 |
| No log | 7.1020 | 348 | 0.7069 | 0.7868 | 0.7069 | 0.8408 |
| No log | 7.1429 | 350 | 0.7218 | 0.7868 | 0.7218 | 0.8496 |
| No log | 7.1837 | 352 | 0.7421 | 0.8124 | 0.7421 | 0.8614 |
| No log | 7.2245 | 354 | 0.7384 | 0.7925 | 0.7384 | 0.8593 |
| No log | 7.2653 | 356 | 0.7213 | 0.7925 | 0.7213 | 0.8493 |
| No log | 7.3061 | 358 | 0.7155 | 0.7925 | 0.7155 | 0.8459 |
| No log | 7.3469 | 360 | 0.7213 | 0.7925 | 0.7213 | 0.8493 |
| No log | 7.3878 | 362 | 0.7565 | 0.7925 | 0.7565 | 0.8698 |
| No log | 7.4286 | 364 | 0.8072 | 0.6986 | 0.8072 | 0.8985 |
| No log | 7.4694 | 366 | 0.8530 | 0.6791 | 0.8530 | 0.9236 |
| No log | 7.5102 | 368 | 0.8593 | 0.6791 | 0.8593 | 0.9270 |
| No log | 7.5510 | 370 | 0.8429 | 0.6374 | 0.8429 | 0.9181 |
| No log | 7.5918 | 372 | 0.7928 | 0.6374 | 0.7928 | 0.8904 |
| No log | 7.6327 | 374 | 0.7633 | 0.7566 | 0.7633 | 0.8737 |
| No log | 7.6735 | 376 | 0.7530 | 0.7566 | 0.7530 | 0.8678 |
| No log | 7.7143 | 378 | 0.7206 | 0.7925 | 0.7206 | 0.8489 |
| No log | 7.7551 | 380 | 0.6593 | 0.7782 | 0.6593 | 0.8120 |
| No log | 7.7959 | 382 | 0.6078 | 0.7511 | 0.6078 | 0.7796 |
| No log | 7.8367 | 384 | 0.5953 | 0.7176 | 0.5953 | 0.7716 |
| No log | 7.8776 | 386 | 0.6071 | 0.7176 | 0.6071 | 0.7792 |
| No log | 7.9184 | 388 | 0.6186 | 0.7868 | 0.6186 | 0.7865 |
| No log | 7.9592 | 390 | 0.6234 | 0.7868 | 0.6234 | 0.7896 |
| No log | 8.0 | 392 | 0.6283 | 0.7868 | 0.6283 | 0.7927 |
| No log | 8.0408 | 394 | 0.6537 | 0.7868 | 0.6537 | 0.8085 |
| No log | 8.0816 | 396 | 0.6898 | 0.7868 | 0.6898 | 0.8305 |
| No log | 8.1224 | 398 | 0.7137 | 0.7868 | 0.7137 | 0.8448 |
| No log | 8.1633 | 400 | 0.7229 | 0.8124 | 0.7229 | 0.8502 |
| No log | 8.2041 | 402 | 0.7350 | 0.8124 | 0.7350 | 0.8573 |
| No log | 8.2449 | 404 | 0.7491 | 0.8124 | 0.7491 | 0.8655 |
| No log | 8.2857 | 406 | 0.7664 | 0.7925 | 0.7664 | 0.8754 |
| No log | 8.3265 | 408 | 0.7658 | 0.7925 | 0.7658 | 0.8751 |
| No log | 8.3673 | 410 | 0.7567 | 0.7925 | 0.7567 | 0.8699 |
| No log | 8.4082 | 412 | 0.7621 | 0.7925 | 0.7621 | 0.8730 |
| No log | 8.4490 | 414 | 0.7804 | 0.7723 | 0.7804 | 0.8834 |
| No log | 8.4898 | 416 | 0.8193 | 0.7148 | 0.8193 | 0.9051 |
| No log | 8.5306 | 418 | 0.8421 | 0.6713 | 0.8421 | 0.9177 |
| No log | 8.5714 | 420 | 0.8690 | 0.6098 | 0.8690 | 0.9322 |
| No log | 8.6122 | 422 | 0.8823 | 0.6098 | 0.8823 | 0.9393 |
| No log | 8.6531 | 424 | 0.8919 | 0.6098 | 0.8919 | 0.9444 |
| No log | 8.6939 | 426 | 0.8671 | 0.6278 | 0.8671 | 0.9312 |
| No log | 8.7347 | 428 | 0.8299 | 0.7340 | 0.8299 | 0.9110 |
| No log | 8.7755 | 430 | 0.7907 | 0.7925 | 0.7907 | 0.8892 |
| No log | 8.8163 | 432 | 0.7599 | 0.8124 | 0.7599 | 0.8717 |
| No log | 8.8571 | 434 | 0.7313 | 0.7868 | 0.7313 | 0.8552 |
| No log | 8.8980 | 436 | 0.7099 | 0.7868 | 0.7099 | 0.8426 |
| No log | 8.9388 | 438 | 0.6925 | 0.7868 | 0.6925 | 0.8321 |
| No log | 8.9796 | 440 | 0.6879 | 0.7868 | 0.6879 | 0.8294 |
| No log | 9.0204 | 442 | 0.6957 | 0.7868 | 0.6957 | 0.8341 |
| No log | 9.0612 | 444 | 0.7101 | 0.7868 | 0.7101 | 0.8427 |
| No log | 9.1020 | 446 | 0.7140 | 0.7868 | 0.7140 | 0.8450 |
| No log | 9.1429 | 448 | 0.7167 | 0.7868 | 0.7167 | 0.8466 |
| No log | 9.1837 | 450 | 0.7192 | 0.7868 | 0.7192 | 0.8480 |
| No log | 9.2245 | 452 | 0.7172 | 0.7868 | 0.7172 | 0.8469 |
| No log | 9.2653 | 454 | 0.7125 | 0.7868 | 0.7125 | 0.8441 |
| No log | 9.3061 | 456 | 0.7090 | 0.7868 | 0.7090 | 0.8420 |
| No log | 9.3469 | 458 | 0.7171 | 0.7868 | 0.7171 | 0.8468 |
| No log | 9.3878 | 460 | 0.7234 | 0.7868 | 0.7234 | 0.8505 |
| No log | 9.4286 | 462 | 0.7296 | 0.7868 | 0.7296 | 0.8542 |
| No log | 9.4694 | 464 | 0.7414 | 0.7868 | 0.7414 | 0.8610 |
| No log | 9.5102 | 466 | 0.7475 | 0.7868 | 0.7475 | 0.8646 |
| No log | 9.5510 | 468 | 0.7517 | 0.7868 | 0.7517 | 0.8670 |
| No log | 9.5918 | 470 | 0.7562 | 0.7868 | 0.7562 | 0.8696 |
| No log | 9.6327 | 472 | 0.7595 | 0.8124 | 0.7595 | 0.8715 |
| No log | 9.6735 | 474 | 0.7608 | 0.8124 | 0.7608 | 0.8722 |
| No log | 9.7143 | 476 | 0.7561 | 0.8124 | 0.7561 | 0.8695 |
| No log | 9.7551 | 478 | 0.7553 | 0.8124 | 0.7553 | 0.8691 |
| No log | 9.7959 | 480 | 0.7572 | 0.8124 | 0.7572 | 0.8701 |
| No log | 9.8367 | 482 | 0.7591 | 0.8124 | 0.7591 | 0.8713 |
| No log | 9.8776 | 484 | 0.7585 | 0.8124 | 0.7585 | 0.8709 |
| No log | 9.9184 | 486 | 0.7591 | 0.8124 | 0.7591 | 0.8713 |
| No log | 9.9592 | 488 | 0.7594 | 0.8124 | 0.7594 | 0.8714 |
| No log | 10.0 | 490 | 0.7596 | 0.8124 | 0.7596 | 0.8715 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF | mradermacher | 2024-11-25T15:27:17Z | 25 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Deev124/hermes-llama3-roleplay-1000-v6",
"base_model:quantized:Deev124/hermes-llama3-roleplay-1000-v6",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-25T12:46:47Z | ---
base_model: Deev124/hermes-llama3-roleplay-1000-v6
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Deev124/hermes-llama3-roleplay-1000-v6
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
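As one concrete option, the files below can be loaded directly with `llama-cpp-python`. The snippet is a minimal sketch assuming that package and the i1-Q4_K_M file from the table below, downloaded locally; any of the other quants works the same way.

```python
# Minimal sketch using llama-cpp-python; assumes the i1-Q4_K_M file from the table below
# has been downloaded locally. Any of the other listed quants can be substituted.
from llama_cpp import Llama

llm = Llama(
    model_path="hermes-llama3-roleplay-1000-v6.i1-Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Write a short in-character greeting.", max_tokens=128)
print(out["choices"][0]["text"])
```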
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-1000-v6-i1-GGUF/resolve/main/hermes-llama3-roleplay-1000-v6.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
maxg73872/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-medmcqa-2024-11-25-T15-21-21 | maxg73872 | 2024-11-25T15:25:34Z | 128 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-11-25T15:25:12Z | ---
library_name: transformers
license: mit
base_model: microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-medmcqa-2024-11-25-T15-21-21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-medmcqa-2024-11-25-T15-21-21
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1358
- Accuracy: 0.5238
- F1: 0.5281
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 1.1639 | 0.9978 | 57 | 1.1358 | 0.5238 | 0.5281 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
dhanushreddy29/tshirt-design | dhanushreddy29 | 2024-11-25T15:22:04Z | 12 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:adapter:black-forest-labs/FLUX.1-schnell",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-11-25T15:21:48Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: Tshirt Design, A cartoon image of a person juggling paintbrushes, gears,
and lightbulbs, with the phrase "Embrace the Mess" written in playful, graffiti-style
font. The background is a mix of vibrant splashes of paint and abstract doodles,
representing creativity in action, tsdsign
output:
url: samples/1732548090228__000001000_0.jpg
- text: Tshirt Design, A skeleton skateboarding through outer space, leaving a trail
of colorful stardust behind. The words "Skate Through the Universe" wrap around
the design in glowing, stylized font, tsdsign
output:
url: samples/1732548099417__000001000_1.jpg
base_model: black-forest-labs/FLUX.1-schnell
instance_prompt: tsdsign
license: apache-2.0
---
# tshirt-design
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `tsdsign` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/dhanushreddy29/tshirt-design/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-schnell', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('dhanushreddy29/tshirt-design', weight_name='tshirt-design.safetensors')
image = pipeline('Tshirt Design, A cartoon image of a person juggling paintbrushes, gears, and lightbulbs, with the phrase "Embrace the Mess" written in playful, graffiti-style font. The background is a mix of vibrant splashes of paint and abstract doodles, representing creativity in action, tsdsign').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
0wovv0/dangvantuan_mlm | 0wovv0 | 2024-11-25T15:13:05Z | 9 | 0 | null | [
"safetensors",
"roberta",
"vi",
"base_model:dangvantuan/vietnamese-embedding",
"base_model:finetune:dangvantuan/vietnamese-embedding",
"license:apache-2.0",
"region:us"
] | null | 2024-11-25T13:01:41Z | ---
license: apache-2.0
language:
- vi
base_model:
- dangvantuan/vietnamese-embedding
--- |
maxg73872/bert-base-uncased-finetuned-medmcqa-2024-11-25-T15-04-02 | maxg73872 | 2024-11-25T15:07:41Z | 131 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-11-25T15:07:21Z | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-medmcqa-2024-11-25-T15-04-02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-medmcqa-2024-11-25-T15-04-02
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3659
- Accuracy: 0.4762
- F1: 0.4853
## Model description
More information needed
## Intended uses & limitations
More information needed
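As a rough usage sketch (not part of the original card; the question, options, and flow below are purely illustrative), the checkpoint can be loaded for MedMCQA-style multiple-choice scoring:
```python
# Hedged sketch: score four answer options for one question with this checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

repo = "maxg73872/bert-base-uncased-finetuned-medmcqa-2024-11-25-T15-04-02"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

question = "Which vitamin deficiency causes scurvy?"  # illustrative example
options = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"]

# Each (question, option) pair becomes one candidate; the model scores all of them.
encoding = tokenizer([question] * len(options), options, padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # shape: (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print(options[logits.argmax(dim=-1).item()])
```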
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 1.3596 | 0.9978 | 57 | 1.3659 | 0.4762 | 0.4853 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
wwwtwwwt/train3dataset | wwwtwwwt | 2024-11-25T15:07:20Z | 133 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:wwwtwwwt/fineaudio-ArtCreativity",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-11-25T15:07:10Z | ---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- wwwtwwwt/fineaudio-ArtCreativity
metrics:
- wer
model-index:
- name: Whisper Tiny En - ArtCreativity - Photography Tips
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: fineaudio-ArtCreativity-Photography Tips
type: wwwtwwwt/fineaudio-ArtCreativity
args: 'config: en, split: test'
metrics:
- name: Wer
type: wer
value: 34.15042216256177
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny En - ArtCreativity - Photography Tips
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the fineaudio-ArtCreativity-Photography Tips dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7095
- Wer: 34.1504
## Model description
More information needed
## Intended uses & limitations
More information needed
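As a minimal inference sketch (not from the original card; `sample.wav` is a placeholder path to a local English audio clip):
```python
# Hedged sketch: transcribe an audio file with this fine-tuned Whisper Tiny checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="wwwtwwwt/train3dataset")
print(asr("sample.wav")["text"])
```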
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.7104 | 0.7199 | 1000 | 0.7320 | 36.1841 |
| 0.4721 | 1.4399 | 2000 | 0.7127 | 35.3579 |
| 0.3614 | 2.1598 | 3000 | 0.7118 | 34.7159 |
| 0.3472 | 2.8798 | 4000 | 0.7095 | 34.1504 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.0
|
trl-internal-testing/tiny-PaliGemmaForConditionalGeneration | trl-internal-testing | 2024-11-25T15:06:52Z | 67 | 0 | transformers | [
"transformers",
"safetensors",
"paligemma",
"image-text-to-text",
"trl",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-11-25T15:06:47Z | ---
library_name: transformers
tags:
- trl
---
# Tiny PaliGemmaForConditionalGeneration
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
trl-internal-testing/tiny-LlavaForConditionalGeneration | trl-internal-testing | 2024-11-25T15:06:40Z | 29,572 | 0 | transformers | [
"transformers",
"safetensors",
"llava",
"image-text-to-text",
"trl",
"conversational",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-11-25T15:06:37Z | ---
library_name: transformers
tags:
- trl
---
# Tiny LlavaForConditionalGeneration
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
trl-internal-testing/tiny-T5ForConditionalGeneration | trl-internal-testing | 2024-11-25T15:06:27Z | 949,386 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"trl",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-25T15:06:24Z | ---
library_name: transformers
tags:
- trl
---
# Tiny T5ForConditionalGeneration
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
trl-internal-testing/tiny-Phi3ForCausalLM | trl-internal-testing | 2024-11-25T15:06:15Z | 87,468 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"trl",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T15:06:11Z | ---
library_name: transformers
tags:
- trl
---
# Tiny Phi3ForCausalLM
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
trl-internal-testing/tiny-OPTForCausalLM | trl-internal-testing | 2024-11-25T15:06:11Z | 87,620 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"trl",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T15:06:08Z | ---
library_name: transformers
tags:
- trl
---
# Tiny OPTForCausalLM
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
trl-internal-testing/tiny-MistralForCausalLM-0.2 | trl-internal-testing | 2024-11-25T15:06:08Z | 120,712 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T15:06:05Z | ---
library_name: transformers
tags:
- trl
---
# Tiny MistralForCausalLM
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
trl-internal-testing/tiny-MistralForCausalLM-0.1 | trl-internal-testing | 2024-11-25T15:06:05Z | 87,489 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T15:06:01Z | ---
library_name: transformers
tags:
- trl
---
# Tiny MistralForCausalLM
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
trl-internal-testing/tiny-LlamaForCausalLM-3.2 | trl-internal-testing | 2024-11-25T15:06:00Z | 197,018 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T15:05:58Z | ---
library_name: transformers
tags:
- trl
---
# Tiny LlamaForCausalLM
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
trl-internal-testing/tiny-LlamaForCausalLM-3.1 | trl-internal-testing | 2024-11-25T15:05:57Z | 88,345 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T15:05:54Z | ---
library_name: transformers
tags:
- trl
---
# Tiny LlamaForCausalLM
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
trl-internal-testing/tiny-GPTNeoXForCausalLM | trl-internal-testing | 2024-11-25T15:05:49Z | 87,703 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"trl",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T15:05:47Z | ---
library_name: transformers
tags:
- trl
---
# Tiny GPTNeoXForCausalLM
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
trl-internal-testing/tiny-GPT2LMHeadModel | trl-internal-testing | 2024-11-25T15:05:46Z | 99,750 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T15:05:37Z | ---
library_name: transformers
tags:
- trl
---
# Tiny GPT2LMHeadModel
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
trl-internal-testing/tiny-GemmaForCausalLM | trl-internal-testing | 2024-11-25T15:05:35Z | 87,483 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T15:05:32Z | ---
library_name: transformers
tags:
- trl
---
# Tiny GemmaForCausalLM
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
trl-internal-testing/tiny-Gemma2ForCausalLM | trl-internal-testing | 2024-11-25T15:05:31Z | 87,454 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"trl",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T15:05:27Z | ---
library_name: transformers
tags:
- trl
---
# Tiny Gemma2ForCausalLM
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
trl-internal-testing/tiny-CohereForCausalLM | trl-internal-testing | 2024-11-25T15:05:18Z | 99,044 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"trl",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T15:05:15Z | ---
library_name: transformers
tags:
- trl
---
# Tiny CohereForCausalLM
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
trl-internal-testing/tiny-BloomForCausalLM | trl-internal-testing | 2024-11-25T15:05:14Z | 100,940 | 0 | transformers | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"trl",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T15:05:11Z | ---
library_name: transformers
tags:
- trl
---
# Tiny BloomForCausalLM
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
Mozilla/distilvit | Mozilla | 2024-11-25T15:04:02Z | 133 | 17 | transformers.js | [
"transformers.js",
"pytorch",
"onnx",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"image-to-text",
"image-captioning",
"dataset:Mozilla/flickr30k-transformed-captions-gpt4o",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:quantized:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"region:us"
] | image-to-text | 2024-03-18T19:59:34Z | ---
tags:
- image-to-text
- image-captioning
- transformers.js
license: apache-2.0
metrics:
- rouge
datasets:
- Mozilla/flickr30k-transformed-captions-gpt4o
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
base_model:
- google/vit-base-patch16-224-in21k
---
# distilvit
This model is a work in progress. It is a fine-tuned version of the following base models:
- a VIT model for the image encoder: https://huggingface.co/google/vit-base-patch16-224-in21k
- a Distilled GPT-2 model for the text decoder: https://huggingface.co/distilbert/distilgpt2
This model was trained on:
- [A debiased version of COCO 2017](https://huggingface.co/datasets/Mozilla/coco-gpt4o)
- [A debiased version of Flickr30k](https://huggingface.co/datasets/Mozilla/flickr30k-transformed-captions-gpt4o)
- [Images from pexels](https://huggingface.co/datasets/Mozilla/pexels-gpt4o)
- [DocOrNot](https://huggingface.co/datasets/Mozilla/docornot)
- [Alt Text Validation](https://huggingface.co/datasets/Mozilla/alt-text-validation)
You can find the code used to create the model here: https://github.com/mozilla/distilvit
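A minimal captioning sketch (not part of this card), assuming the PyTorch weights load through the standard `image-to-text` pipeline; the sample image is one of the widget examples above:
```python
# Hedged sketch: caption an image with distilvit via the image-to-text pipeline.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Mozilla/distilvit")
url = "https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg"
print(captioner(url)[0]["generated_text"])
```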
# training results
```
{
"train/loss": 0.0781,
"train/learning_rate": 0.00003793103448275862,
"train/epoch": 2.41,
"train/global_step": 700,
"eval/loss": 0.09741172194480896,
"eval/rouge1": 60.382,
"eval/rouge2": 38.0754,
"eval/rougeL": 56.9132,
"eval/rougeLsum": 56.9214,
"eval/meteor": 0.5448683804505693,
"eval/gen_len": 9.864678265672467,
"eval/runtime": 343.0443,
"eval/samples_per_second": 10.555,
"eval/steps_per_second": 0.108,
"train/train_runtime": 10567.9413,
"train/train_samples_per_second": 27.414,
"train/train_steps_per_second": 0.274,
"train/total_flos": 9039628706135409000,
"train/train_loss": 0.09852950266429356,
}
```
|
jebish7/bge-small-en-v1.5_MNSR_6 | jebish7 | 2024-11-25T15:01:17Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:29545",
"loss:MultipleNegativesSymmetricRankingLoss",
"arxiv:1908.10084",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-11-25T15:01:10Z | ---
base_model: BAAI/bge-small-en-v1.5
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:29545
- loss:MultipleNegativesSymmetricRankingLoss
widget:
- source_sentence: In terms of audited accounts submission for an Applicant, could
you clarify the scenarios in which the Regulator might agree that a reviewed pro
forma statement of financial position is not needed, and what factors would be
considered in making that determination?
sentences:
- "DocumentID: 1 | PassageID: 4.2.1.(3) | Passage: Where the regulator in another\
\ jurisdiction does not permit the implementation of policies, procedures, systems\
\ and controls consistent with these Rules, the Relevant Person must:\n(a)\tinform\
\ the Regulator in writing immediately; and\n(b)\tapply appropriate additional\
\ measures to manage the money laundering risks posed by the relevant branch or\
\ subsidiary."
- "DocumentID: 11 | PassageID: 2.3.15.(4) | Passage: The Applicant must submit to\
\ the Regulator the following records, as applicable:\n(a)\tAudited accounts,\
\ for the purposes of this Rule and Rule 2.3.2(1), for the last three full financial\
\ years, noting that:\n(i)\tif the Applicant applies for admission less than ninety\
\ days after the end of its last financial year, unless the Applicant has audited\
\ accounts for its latest full financial year, the accounts may be for the three\
\ years to the end of the previous financial year, but must also include audited\
\ or reviewed accounts for its most recent semi-annual financial reporting period;\
\ and\n(ii)\tif the Applicant applies for admission more than six months and seventy-five\
\ days after the end of its last financial year, audited or reviewed accounts\
\ for its most recent semi-annual financial reporting period (or longer period\
\ if available).\n(b)\tUnless the Regulator agrees it is not needed, a reviewed\
\ pro forma statement of financial position. The review must be conducted by an\
\ accredited professional auditor of the company or an independent accountant."
- 'DocumentID: 36 | PassageID: D.1.3. | Passage: Principle 1 – Oversight and responsibility
of climate-related financial risk exposures.Certain functions related to the management
of climate-related financial risks may be delegated, but, as with other risks,
the board is ultimately responsible and accountable for monitoring, managing and
overseeing climate-related risks for the financial firm.
'
- source_sentence: A financial institution is interested in multiple designations,
including the ADGM Green Fund and ADGM Green Bond. For each application, what
fee will the institution incur?
sentences:
- 'DocumentID: 31 | PassageID: 63) | Passage: INITIAL DISCLOSURE OF MATERIAL ESTIMATES.
Disclosure of material estimates of Contingent Resources
Section 2.3 of the PRMS Guidelines states that Contingent Resources may be assigned
for Petroleum Projects that are dependent on “technology under development”, and
further recommended that a number of guidelines are followed in order to distinguish
these estimates from those that should be classified as Unrecoverable Petroleum. By
way of Rule 12.10.1(3), the FSRA fully supports and requires compliance with what
is set out in the PRMS Guidelines.
'
- 'DocumentID: 19 | PassageID: 40) | Passage: REGULATORY REQUIREMENTS FOR AUTHORISED
PERSONS ENGAGED IN REGULATED ACTIVITIES IN RELATION TO VIRTUAL ASSETS
Anti-Money Laundering and Countering Financing of Terrorism
On 21 June 2019, FATF released a revised Guidance for a Risk-Based Approach (RBA)
for VAs and VASPs, as well as an Interpretative Note for Recommendation 15. This
built upon previous FATF statements by clarifying a RBA for Anti-Money Laundering
and Countering the Financing of Terrorism (“AML/CFT”) purposes. The basic principle
underlying the FATF Guidelines is that VASPs are expected to “identify, assess,
and take effective action to mitigate their ML/TF risks” with respect to VAs.
'
- "DocumentID: 4 | PassageID: 10.1.1 | Passage: A Person applying to the Regulator\
\ for any of the following designations:\n(a)\tADGM Green Fund;\n(b)\tADGM Climate\
\ Transition Fund;\n(c)\tADGM Green Portfolio;\n(d)\tADGM Climate Transition Portfolio;\n\
(e)\tADGM Green Bond; or\n(f)\tADGM Sustainability Linked Bond\nmust pay to the\
\ Regulator an application fee of $2,000."
- source_sentence: How does the ADGM expect Authorised Persons to incorporate the
eligibility of collateral types into their overall risk management framework,
particularly concerning Islamic finance principles?
sentences:
- 'DocumentID: 17 | PassageID: Schedule 1.Part 2.Chapter 5.42.(2) | Passage: In
determining for the purposes of sub-paragraph β(1)β(b) whether Deposits are accepted
only on particular occasions, regard is to be had to the frequency of those occasions
and to any characteristics distinguishing them from each other.'
- "DocumentID: 9 | PassageID: 6.8.5 | Passage: \n(a)\tA Fund Manager of an Islamic\
\ REIT may obtain financing either directly or through its Special Purpose Vehicle\
\ up to 65% of the total gross asset value of the Fund provided that such financing\
\ is provided in a Shari'a-compliant manner.\n(b)\tUpon becoming aware that the\
\ borrowing limit set out in 6.8.5(a) has been exceeded, the Fund Manager shall:\n\
(c)\timmediately inform Unitholders and the Regulator of the details of the breach\
\ and the proposed remedial action;\n(d)\tuse its best endeavours to reduce the\
\ excess borrowings;\n(e)\tnot permit the Fund to engage in additional borrowing;\
\ and\n(f)\tinform Unitholders and the Regulator on a regular basis as to the\
\ progress of the remedial action."
- 'DocumentID: 9 | PassageID: 5.1.1.Guidance.(ii) | Passage: The prudential Category
for Islamic Financial Institutions and other Authorised Persons (acting through
an Islamic Window) undertaking the Regulated Activity of Managing PSIAs (which
may be either a Restricted PSIA or an Unrestricted PSIA) is determined in accordance
with PRU Rule 1.3. An Authorised Person which Manages PSIAs (whether as an Islamic
Financial Institution or through an Islamic Window) must comply with the requirements
in PRU in relation to specific prudential requirements relating to Trading Book
and Non-Trading Book activities, including Credit Risk, Market Risk, Liquidity
Risk and Group Risk.'
- source_sentence: Can you please detail the specific Anti-Money Laundering (AML)
and Countering Financing of Terrorism (CFT) measures and controls that our firm
must have in place when dealing with Spot Commodities as per the FSRA's requirements?
sentences:
- 'DocumentID: 34 | PassageID: 65) | Passage: REGULATORY REQUIREMENTS - SPOT COMMODITY
ACTIVITIES
Sanctions
Pursuant to AML Rule 11.2.1(1), an Authorised Person must have arrangements in
place to ensure that only Spot Commodities that are not subject to sanctions or
associated with an entity in the supply chain that is itself subject to a sanction,
are used as part of its Regulated Activities, or utilised as part of a delivery
and/or storage facility operated by itself (or by any third parties it uses). In
demonstrating compliance with the Rule, an Authorised Person must have powers
to resolve any breach in a timely fashion, such as taking emergency action itself
or by compelling the delivery and/or storage facility to take appropriate action. The
FSRA expects this to include the Authorised Person having the ability to sanction
a Member, market participant or the delivery and/or storage facility for acts
or omissions that compromise compliance with applicable sanctions.
'
- "DocumentID: 18 | PassageID: 3.2 | Passage: Financial Services Permissions. VC\
\ Managers operating in ADGM require a Financial Services Permission (“FSP”) to\
\ undertake any Regulated Activity pertaining to VC Funds and/or co-investments\
\ by third parties in VC Funds. The Regulated Activities covered by the FSP will\
\ be dependent on the VC Managersβ investment strategy and business model.\n(a)\t\
Managing a Collective Investment Fund: this includes carrying out fund management\
\ activities in respect of a VC Fund.\n(b)\tAdvising on Investments or Credit\
\ : for VC Managers these activities will be restricted to activities related\
\ to co-investment alongside a VC Fund which the VC Manager manages, such as recommending\
\ that a client invest in an investee company alongside the VC Fund and on the\
\ strategy and structure required to make the investment.\n(c)\tArranging Deals\
\ in Investments: VC Managers may also wish to make arrangements to facilitate\
\ co-investments in the investee company.\nAuthorisation fees and supervision\
\ fees for a VC Manager are capped at USD 10,000 regardless of whether one or\
\ both of the additional Regulated Activities in b) and c) above in relation to\
\ co-investments are included in its FSP. The FSP will include restrictions appropriate\
\ to the business model of a VC Manager."
- 'DocumentID: 24 | PassageID: 3.9 | Passage: Principle 2 – High Standards for Authorisation.
This discerning approach is shown by the FSRA’s power to only permit VAs that
it deems “acceptable”, as determined by risk factors such as security and traceability,
in order to prevent the build-up of risk from illiquid or immature assets. Additionally,
we do not permit stablecoins based on the algorithmic model of valuation to the
underlying fiat currency.'
- source_sentence: What are the common scenarios or instances where assets and liabilities
are not covered by the bases of accounting in Rule 5.3.2, and how should an Insurer
address these in their reporting?
sentences:
- 'DocumentID: 1 | PassageID: 14.4.1.Guidance.1. | Passage: Relevant Persons are
reminded that in accordance with Federal AML Legislation, Relevant Persons or
any of their Employees must not tip off any Person, that is, inform any Person
that he is being scrutinised, or investigated by any other competent authority,
for possible involvement in suspicious Transactions or activity related to money
laundering or terrorist financing.'
- "DocumentID: 12 | PassageID: 5.3.1.Guidance | Passage: \nThe exceptions provided\
\ in this Chapter relate to the following:\na.\tspecific Rules in respect of certain\
\ assets and liabilities, intended to achieve a regulatory objective not achieved\
\ by application of either or both of the bases of accounting set out in Rule\
\ 5.3.2;\nb.\tassets and liabilities that are not dealt with in either or both\
\ of the bases of accounting set out in Rule 5.3.2; and\nc.\tthe overriding power\
\ of the Regulator, set out in Rule 5.1.6, to require an Insurer to adopt a particular\
\ measurement for a specific asset or liability."
- 'DocumentID: 1 | PassageID: 6.2.1.Guidance.2. | Passage: The risk assessment under
Rule 6.2.1(c) should identify actions to mitigate risks associated with undertaking
NFTF business generally, and the use of eKYC specifically. This is because distinct
risks are often likely to arise where business is conducted entirely in an NFTF
manner, compared to when the business relationship includes a mix of face-to-face
and NFTF interactions. The assessment should make reference to risk mitigation
measures recommended by the Regulator, a competent authority of the U.A.E., FATF,
and other relevant bodies.
'
---
# SentenceTransformer based on BAAI/bge-small-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) on the csv dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- csv
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jebish7/bge-small-en-v1.5_MNSR_6")
# Run inference
sentences = [
'What are the common scenarios or instances where assets and liabilities are not covered by the bases of accounting in Rule 5.3.2, and how should an Insurer address these in their reporting?',
'DocumentID: 12 | PassageID: 5.3.1.Guidance | Passage: \nThe exceptions provided in this Chapter relate to the following:\na.\tspecific Rules in respect of certain assets and liabilities, intended to achieve a regulatory objective not achieved by application of either or both of the bases of accounting set out in Rule \u200e5.3.2;\nb.\tassets and liabilities that are not dealt with in either or both of the bases of accounting set out in Rule \u200e5.3.2; and\nc.\tthe overriding power of the Regulator, set out in Rule \u200e5.1.6, to require an Insurer to adopt a particular measurement for a specific asset or liability.',
'DocumentID: 1 | PassageID: 14.4.1.Guidance.1. | Passage: Relevant Persons are reminded that in accordance with Federal AML Legislation, Relevant Persons or any of their Employees must not tip off any Person, that is, inform any Person that he is being scrutinised, or investigated by any other competent authority, for possible involvement in suspicious Transactions or activity related to money laundering or terrorist financing.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### csv
* Dataset: csv
* Size: 29,545 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 34.95 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 132.0 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>If a financial institution offers Money Remittance as one of its services, under what circumstances is it deemed to be holding Relevant Money and therefore subject to regulatory compliance (a)?</code> | <code>DocumentID: 13 | PassageID: 3.7.1.Guidance.1. | Passage: An Authorised Person is considered to be holding Relevant Money and subject to (a) where it offers Payment Services alongside currency exchange or Money Remittance.<br></code> |
| <code>What are the consequences for a Recognised Body or Authorised Person if they fail to comply with ADGM's requirements regarding severance payments?</code> | <code>DocumentID: 7 | PassageID: APP1.A1.2.Guidance.9. | Passage: Severance payments. Where an Authorised Person or Recognised Body provides discretionary payouts on termination of employment ("severance payments", also called "golden parachutes"), such payment should generally be subject to appropriate limits or shareholder approval. In any case, such payouts should be aligned with the Authorised Person or Recognised Body's overall financial condition and performance over an appropriate time horizon and should not be payable in the case of failure or threatened failure of the Authorised Person or Recognised Body, particularly to an individual whose actions may have contributed to the failure or potential failure of the Authorised Person or Recognised Body.<br></code> |
| <code>If a Public Fund is structured as an Investment Trust, to whom should the Fund Manager report the review findings regarding delegated Regulated Activities or outsourced functions?</code> | <code>DocumentID: 6 | PassageID: PART 5.12.12.8.(1) | Passage: A Fund Manager or the Trustee of a Public Fund, which has delegated any Regulated Activities or outsourced any functions, must conduct a review of the carrying out of the relevant activities or functions by the Service Provider and present the findings of the review to either:<br>(a) the Fund's Governing Body every 6 months at the Fund's board meeting; or<br>(b) in the case of a Fund structured as an Investment Trust, to the Trustee.</code> |
* Loss: [<code>MultipleNegativesSymmetricRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativessymmetricrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
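For context, here is a hedged sketch (not taken from this card) of how such a loss is typically wired up over (anchor, positive) pairs with the Sentence Transformers v3 trainer; the two sample pairs are invented:

```python
# Hedged sketch: fine-tune a bge-small model with MultipleNegativesSymmetricRankingLoss.
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("BAAI/bge-small-en-v1.5")
train_dataset = Dataset.from_dict({
    "anchor": [
        "What does the guidance on Rule 5.3.2 cover?",
        "Who approves severance payments?",
    ],
    "positive": [
        "DocumentID: 12 | PassageID: 5.3.1.Guidance | Passage: ...",
        "DocumentID: 7 | PassageID: APP1.A1.2.Guidance.9. | Passage: ...",
    ],
})
loss = losses.MultipleNegativesSymmetricRankingLoss(model, scale=20.0)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```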
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `learning_rate`: 2e-05
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.2165 | 100 | 1.4357 |
| 0.4329 | 200 | 0.9589 |
| 0.6494 | 300 | 0.9193 |
| 0.8658 | 400 | 0.8542 |
| 1.0823 | 500 | 0.8643 |
| 1.2987 | 600 | 0.8135 |
| 1.5152 | 700 | 0.7658 |
| 1.7316 | 800 | 0.7454 |
| 1.9481 | 900 | 0.7477 |
| 2.1645 | 1000 | 0.7586 |
| 2.3810 | 1100 | 0.6978 |
| 2.5974 | 1200 | 0.7152 |
| 2.8139 | 1300 | 0.6866 |
| 0.2165 | 100 | 0.7049 |
| 0.4329 | 200 | 0.6651 |
| 0.6494 | 300 | 0.6942 |
| 0.8658 | 400 | 0.6695 |
| 1.0823 | 500 | 0.7048 |
| 1.2987 | 600 | 0.636 |
| 1.5152 | 700 | 0.5984 |
| 1.7316 | 800 | 0.6001 |
| 1.9481 | 900 | 0.6096 |
| 2.1645 | 1000 | 0.6313 |
| 2.3810 | 1100 | 0.5437 |
| 2.5974 | 1200 | 0.5716 |
| 2.8139 | 1300 | 0.5634 |
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
tryonlabs/FLUX.1-dev-LoRA-Lehenga-Generator | tryonlabs | 2024-11-25T15:01:03Z | 152 | 12 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-25T08:00:20Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: lehenga
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
widget:
- text: >-
A flat-lay image of A lehenga with a traditional style and a fitted
waistline is elegantly crafted from stretchy silk material, ensuring a
comfortable and flattering fit. The long hemline adds a touch of grace and
sophistication to the ensemble. Adorned in a solid blue color, it features a
sleeveless design that complements its sweetheart neckline. The solid
pattern and the luxurious silk fabric together create a timeless and chic
look that is perfect for special occasions.
output:
url: images/example_w40syzp04.png
- text: >-
A lehenga with a vibrant yellow color and an embroidered pattern showcases a
traditional style. It features a fitted silhouette that gracefully flows
with its long hemline. Crafted from a luxurious silk material, this lehenga
offers a stretchable fabric elasticity for comfort and movement. The full
sleeve length adds elegance, while the flared waistline accentuates the
wearer's figure. The V-neck design further complements the overall
aesthetic, making it a stunning choice for special occasions.
output:
url: images/example_bjilw3a2b.png
- text: >-
A lehenga with a traditional style and a fitted fit features a long hemline
that elegantly drapes. The fabric is a comfortable cotton blend that
includes stretchable properties, ensuring a flattering fit. Adorned with a
full sleeve length, it adds a touch of grace to the ensemble. The lehenga
boasts a beautiful floral pattern, with white as the dominant color, and a
V-neckline that adds a subtle yet sophisticated detail. The high waistline
accentuates the figure, making it a chic choice for any occasion.
output:
url: images/example_infawkf33.png
- text: >-
A traditional silk lehenga with a fitted waistline features a long, flowing
hemline and solid pattern. It has long sleeves for sophistication and
stretchy fabric for comfort. A V-neck adds modern flair, balancing tradition
and contemporary fashion.
output:
url: images/example_3h6g4bd1i.png
- text: >-
A flat-lay image of A traditional silk lehenga with a fitted waistline
features a long, flowing hemline and solid pattern. It has long sleeves for
sophistication and stretchy fabric for comfort. A V-neck adds modern flair,
balancing tradition and contemporary fashion.
output:
url: images/example_s65mofbiz.png
- text: Blue & Beige Solid Ready to Wear Lehenga with Blouse
output:
url: images/example_f6gv4oen5.png
---
# FLUX.1-dev LoRA Lehenga Generator
<Gallery />
## Model description
FLUX.1-dev LoRA Lehenga Generator generates images of lehengas from a prompt describing the color, pattern, fit, style, material, and type.
## Repository used
AI Toolkit by Ostris
https://github.com/ostris/ai-toolkit
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
import random
from diffusers import FluxPipeline
import torch
MAX_SEED=42
PRE_TRAINED_MODEL = "black-forest-labs/FLUX.1-dev"
FINE_TUNED_MODEL = "tryonlabs/FLUX.1-dev-LoRA-Lehenga-Generator"
# Load Flux.1-dev
pipe = FluxPipeline.from_pretrained(PRE_TRAINED_MODEL, torch_dtype=torch.float16).to("cuda")
# Load fine-tuned model
pipe.load_lora_weights(FINE_TUNED_MODEL, adapter_name="default", weight_name="lehenga-generator.safetensors")
seed = random.randint(0, MAX_SEED)
generator = torch.Generator().manual_seed(seed)
prompt = "A flat lay image of a lehenga with a pink solid pattern and traditional style is elegantly fitted, showcasing a floor-length hemline that flows beautifully. Crafted from luxurious silk, this garment features a stretchy fabric that ensures comfort and a flattering fit. The lehenga is sleeveless, allowing for unrestricted movement, and it is adorned with a sweetheart neckline that adds a touch of femininity. The fitted waistline accentuates the wearer's silhouette, making it a stunning choice for special occasions."
image = pipe(prompt, height=1024, width=1024, num_images_per_prompt=1, generator=generator, guidance_scale=4.5, num_inference_steps=40).images[0]
image.save("gen_image.jpg")
```
## Download model
Weights for this model are available in Safetensors format.
[Download](/tryonlabs/FLUX.1-dev-LoRA-Lehenga-Generator/tree/main) them in the Files & versions tab.
|
wltjr1007/testsss | wltjr1007 | 2024-11-25T15:00:10Z | 188 | 0 | transformers | [
"transformers",
"safetensors",
"conditional-unet",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | 2024-11-25T13:31:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
katuni4ka/textual_inversion_cat | katuni4ka | 2024-11-25T14:54:34Z | 5,724 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:hf-internal-testing/tiny-stable-diffusion-torch",
"base_model:adapter:hf-internal-testing/tiny-stable-diffusion-torch",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-11-25T14:39:43Z | ---
base_model: hf-internal-testing/tiny-stable-diffusion-torch
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
- diffusers-training
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - katuni4ka/textual_inversion_cat
These are textual inversion adaptation weights for hf-internal-testing/tiny-stable-diffusion-torch. You can find some example images below.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
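Until the snippet above is filled in, the usual flow looks like the hedged sketch below: load the base pipeline, attach the learned embedding, and generate. The placeholder token `<cat-toy>` is an assumption; check the embedding file in this repository for the actual token name.

```python
# Hedged sketch, not the official example: run the tiny base pipeline with the
# learned textual-inversion embedding from this repository.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("hf-internal-testing/tiny-stable-diffusion-torch")
pipe.load_textual_inversion("katuni4ka/textual_inversion_cat")
# The placeholder token "<cat-toy>" is assumed, not confirmed by this card.
image = pipe("a photo of a <cat-toy> on a beach", num_inference_steps=25).images[0]
image.save("cat_toy.png")
```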
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
stevenbucaille/superglue_outdoor | stevenbucaille | 2024-11-25T14:54:16Z | 65 | 1 | transformers | [
"transformers",
"safetensors",
"superglue",
"feature-extraction",
"arxiv:1911.11763",
"license:other",
"region:us"
] | feature-extraction | 2024-03-30T22:27:03Z | ---
library_name: transformers
inference: false
license: other
---
# SuperGlue
The SuperGlue model was proposed
in [SuperGlue: Learning Feature Matching with Graph Neural Networks](https://arxiv.org/abs/1911.11763) by Paul-Edouard Sarlin, Daniel
DeTone, Tomasz Malisiewicz and Andrew Rabinovich.
This model matches two sets of interest points detected across a pair of images. Paired with the
[SuperPoint model](https://huggingface.co/magic-leap-community/superpoint), it can be used to match two images and
estimate the pose between them. This model is useful for tasks such as image matching, homography estimation, etc.
The abstract from the paper is the following:
*This paper introduces SuperGlue, a neural network that matches two sets of local features by jointly finding correspondences
and rejecting non-matchable points. Assignments are estimated by solving a differentiable optimal transport problem, whose costs
are predicted by a graph neural network. We introduce a flexible context aggregation mechanism based on attention, enabling
SuperGlue to reason about the underlying 3D scene and feature assignments jointly. Compared to traditional, hand-designed heuristics,
our technique learns priors over geometric transformations and regularities of the 3D world through end-to-end training from image
pairs. SuperGlue outperforms other learned approaches and achieves state-of-the-art results on the task of pose estimation in
challenging real-world indoor and outdoor environments. The proposed method performs matching in real-time on a modern GPU and
can be readily integrated into modern SfM or SLAM systems. The code and trained weights are publicly available at this [URL](https://github.com/magicleap/SuperGluePretrainedNetwork).*
<img src="https://cdn-uploads.huggingface.co/production/uploads/632885ba1558dac67c440aa8/2I8QDRNoMhQCuL236CvdN.png" alt="drawing" width="500"/>
This model was contributed by [stevenbucaille](https://huggingface.co/stevenbucaille).
The original code can be found [here](https://github.com/magicleap/SuperGluePretrainedNetwork).
## Model Details
### Model Description
SuperGlue is a neural network that matches two sets of local features by jointly finding correspondences and rejecting non-matchable points.
It introduces a flexible context aggregation mechanism based on attention, enabling it to reason about the underlying 3D scene and feature
assignments. The architecture consists of two main components: the Attentional Graph Neural Network and the Optimal Matching Layer.
<img src="https://cdn-uploads.huggingface.co/production/uploads/632885ba1558dac67c440aa8/zZGjSWQU2na5aPFRak5kp.png" alt="drawing" width="1000"/>
The Attentional Graph Neural Network uses a Keypoint Encoder to map keypoint positions and visual descriptors.
It employs self- and cross-attention layers to create powerful representations. The Optimal Matching Layer creates a
score matrix, augments it with dustbins, and finds the optimal partial assignment using the Sinkhorn algorithm.
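To make the matching step concrete, here is a toy, hedged sketch (not the model's actual code) of the log-domain Sinkhorn iterations that an Optimal Matching Layer of this kind runs over the dustbin-augmented score matrix:

```python
# Hedged sketch: log-domain Sinkhorn normalization of a score matrix.
import torch

def log_sinkhorn(scores: torch.Tensor, iters: int = 20) -> torch.Tensor:
    # scores: (M, N) log-score matrix, already augmented with dustbin row/column
    log_mu = torch.zeros(scores.size(0))  # uniform row marginals (log 1)
    log_nu = torch.zeros(scores.size(1))  # uniform column marginals (log 1)
    u, v = torch.zeros_like(log_mu), torch.zeros_like(log_nu)
    for _ in range(iters):
        u = log_mu - torch.logsumexp(scores + v[None, :], dim=1)
        v = log_nu - torch.logsumexp(scores + u[:, None], dim=0)
    return scores + u[:, None] + v[None, :]  # log of the soft assignment matrix

soft_assignment = log_sinkhorn(torch.randn(5, 6)).exp()
print(soft_assignment.sum(dim=0))  # columns sum to ~1; rows converge as iters grows
```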
- **Developed by:** MagicLeap
- **Model type:** Image Matching
- **License:** ACADEMIC OR NON-PROFIT ORGANIZATION NONCOMMERCIAL RESEARCH USE ONLY
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/magicleap/SuperGluePretrainedNetwork
- **Paper:** https://arxiv.org/pdf/1911.11763
- **Demo:** https://psarlin.com/superglue/
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
SuperGlue is designed for feature matching and pose estimation tasks in computer vision. It can be applied to a variety of multiple-view
geometry problems and can handle challenging real-world indoor and outdoor environments. However, it may not perform well on tasks that
require different types of visual understanding, such as object detection or image classification.
## How to Get Started with the Model
Here is a quick example of using the model. Since this model is an image matching model, it requires pairs of images to be matched:
```python
from transformers import AutoImageProcessor, AutoModel
import torch
from PIL import Image
import requests
url = "https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/refs/heads/master/assets/phototourism_sample_images/united_states_capitol_98169888_3347710852.jpg"
im1 = Image.open(requests.get(url, stream=True).raw)
url = "https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/refs/heads/master/assets/phototourism_sample_images/united_states_capitol_26757027_6717084061.jpg"
im2 = Image.open(requests.get(url, stream=True).raw)
images = [im1, im2]
processor = AutoImageProcessor.from_pretrained("stevenbucaille/superglue_outdoor")
model = AutoModel.from_pretrained("stevenbucaille/superglue_outdoor")
inputs = processor(images, return_tensors="pt")
outputs = model(**inputs)
```
The outputs contain the list of keypoints detected by the keypoint detector as well as the list of matches with their corresponding matching scores.
Because SuperGlue outputs a dynamic number of matches per image pair, you will need to use the mask attribute to retrieve the valid entries:
```python
from transformers import AutoImageProcessor, AutoModel
import torch
from PIL import Image
import requests
url_image_1 = "https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/refs/heads/master/assets/phototourism_sample_images/united_states_capitol_98169888_3347710852.jpg"
image1 = Image.open(requests.get(url_image_1, stream=True).raw)
url_image_2 = "https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/refs/heads/master/assets/phototourism_sample_images/united_states_capitol_26757027_6717084061.jpg"
image2 = Image.open(requests.get(url_image_2, stream=True).raw)
images = [image1, image2]
processor = AutoImageProcessor.from_pretrained("stevenbucaille/superglue_indoor")
model = AutoModel.from_pretrained("stevenbucaille/superglue_indoor")
inputs = processor(images, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# Get the respective image masks
image0_mask, image1_mask = outputs.mask[0]
image0_indices = torch.nonzero(image0_mask).squeeze()
image1_indices = torch.nonzero(image1_mask).squeeze()
image0_matches = outputs.matches[0, 0][image0_indices]
image1_matches = outputs.matches[0, 1][image1_indices]
image0_matching_scores = outputs.matching_scores[0, 0][image0_indices]
image1_matching_scores = outputs.matching_scores[0, 1][image1_indices]
```
You can use the `post_process_keypoint_matching` method from the `SuperGlueImageProcessor` to get the keypoints and matches in a more readable format:
```python
image_sizes = [(image.height, image.width) for image in images]
outputs = processor.post_process_keypoint_matching(outputs, image_sizes)
for i, output in enumerate(outputs):
print("For the image pair", i)
for keypoint0, keypoint1, matching_score in zip(output["keypoints0"], output["keypoints1"],
output["matching_scores"]):
print(
f"Keypoint at coordinate {keypoint0.numpy()} in the first image matches with keypoint at coordinate {keypoint1.numpy()} in the second image with a score of {matching_score}."
)
```
From the outputs, you can visualize the matches between the two images using the following code:
```python
import matplotlib.pyplot as plt
import numpy as np
# Create side by side image
merged_image = np.zeros((max(image1.height, image2.height), image1.width + image2.width, 3))
merged_image[: image1.height, : image1.width] = np.array(image1) / 255.0
merged_image[: image2.height, image1.width :] = np.array(image2) / 255.0
plt.imshow(merged_image)
plt.axis("off")
# Retrieve the keypoints and matches
output = outputs[0]
keypoints0 = output["keypoints0"]
keypoints1 = output["keypoints1"]
matching_scores = output["matching_scores"]
keypoints0_x, keypoints0_y = keypoints0[:, 0].numpy(), keypoints0[:, 1].numpy()
keypoints1_x, keypoints1_y = keypoints1[:, 0].numpy(), keypoints1[:, 1].numpy()
# Plot the matches
for keypoint0_x, keypoint0_y, keypoint1_x, keypoint1_y, matching_score in zip(
keypoints0_x, keypoints0_y, keypoints1_x, keypoints1_y, matching_scores
):
plt.plot(
[keypoint0_x, keypoint1_x + image1.width],
[keypoint0_y, keypoint1_y],
color=plt.get_cmap("RdYlGn")(matching_score.item()),
alpha=0.9,
linewidth=0.5,
)
plt.scatter(keypoint0_x, keypoint0_y, c="black", s=2)
plt.scatter(keypoint1_x + image1.width, keypoint1_y, c="black", s=2)
# Save the plot
plt.savefig("matched_image.png", dpi=300, bbox_inches='tight')
plt.close()
```

## Training Details
### Training Data
SuperGlue is trained on large annotated pose-estimation datasets, enabling it to learn geometric priors and to reason about the 3D scene.
The training data consists of image pairs with ground-truth correspondences and unmatched keypoints derived from ground-truth poses and depth maps.
### Training Procedure
SuperGlue is trained in a supervised manner using ground truth matches and unmatched keypoints. The loss function minimizes
the negative log-likelihood of the assignment matrix, aiming to simultaneously maximize precision and recall.
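As a rough, hedged illustration of this objective (the helper below is an assumption about how such a loss could be written, not the released training code):
```python
import torch

def matching_nll(log_assignment: torch.Tensor, matches, unmatched0, unmatched1) -> torch.Tensor:
    # log_assignment: (M+1) x (N+1) log-probabilities from the optimal matching layer,
    # where the last row/column is the dustbin for unmatched keypoints.
    # matches: iterable of (i, j) ground-truth correspondences.
    # unmatched0 / unmatched1: keypoint indices with no match in the other image.
    terms = [log_assignment[i, j] for i, j in matches]
    terms += [log_assignment[i, -1] for i in unmatched0]
    terms += [log_assignment[-1, j] for j in unmatched1]
    return -torch.stack(terms).mean()
```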
#### Training Hyperparameters
- **Training regime:** fp32
#### Speeds, Sizes, Times
SuperGlue is designed to be efficient and runs in real-time on a modern GPU. A forward pass takes approximately 69 milliseconds (15 FPS) for an indoor image pair.
The model has 12 million parameters, making it relatively compact compared to some other deep learning models.
The inference speed of SuperGlue is suitable for real-time applications and can be readily integrated into
modern Simultaneous Localization and Mapping (SLAM) or Structure-from-Motion (SfM) systems.
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@inproceedings{sarlin2020superglue,
title={Superglue: Learning feature matching with graph neural networks},
author={Sarlin, Paul-Edouard and DeTone, Daniel and Malisiewicz, Tomasz and Rabinovich, Andrew},
booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
pages={4938--4947},
year={2020}
}
```
## Model Card Authors
[Steven Bucaille](https://github.com/sbucaille) |
artyomboyko/distilhubert-finetuned-gtzan | artyomboyko | 2024-11-25T14:52:44Z | 88 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-08-23T19:30:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
base_model: ntu-spml/distilhubert
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4989
- Accuracy: 0.91
## Model description
More information needed
## Intended uses & limitations
More information needed
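A minimal inference sketch (a hedged example assuming the standard audio-classification pipeline; `example.wav` is a placeholder for any music clip):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="artyomboyko/distilhubert-finetuned-gtzan",
)
# GTZAN labels are music genres such as "rock", "jazz" or "classical".
predictions = classifier("example.wav")
print(predictions)
```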
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2359 | 1.0 | 112 | 0.4776 | 0.87 |
| 0.1235 | 2.0 | 225 | 0.4872 | 0.84 |
| 0.2083 | 3.0 | 337 | 0.4910 | 0.85 |
| 0.19 | 4.0 | 450 | 0.4953 | 0.87 |
| 0.1128 | 5.0 | 562 | 0.4801 | 0.87 |
| 0.1644 | 6.0 | 675 | 0.4703 | 0.87 |
| 0.0699 | 7.0 | 787 | 0.4692 | 0.85 |
| 0.1082 | 8.0 | 900 | 0.4708 | 0.87 |
| 0.0898 | 9.0 | 1012 | 0.4347 | 0.89 |
| 0.1071 | 10.0 | 1125 | 0.5310 | 0.85 |
| 0.0727 | 11.0 | 1237 | 0.4765 | 0.87 |
| 0.0338 | 12.0 | 1350 | 0.4859 | 0.87 |
| 0.0233 | 13.0 | 1462 | 0.4713 | 0.87 |
| 0.0248 | 14.0 | 1575 | 0.5068 | 0.88 |
| 0.0263 | 15.0 | 1687 | 0.4874 | 0.88 |
| 0.0185 | 16.0 | 1800 | 0.4925 | 0.88 |
| 0.0142 | 17.0 | 1912 | 0.4766 | 0.89 |
| 0.0178 | 18.0 | 2025 | 0.4850 | 0.89 |
| 0.0153 | 19.0 | 2137 | 0.4660 | 0.88 |
| 0.012 | 20.0 | 2250 | 0.4831 | 0.88 |
| 0.0113 | 21.0 | 2362 | 0.4965 | 0.89 |
| 0.0106 | 22.0 | 2475 | 0.5098 | 0.89 |
| 0.011 | 23.0 | 2587 | 0.5093 | 0.89 |
| 0.009 | 24.0 | 2700 | 0.4989 | 0.91 |
| 0.0094 | 25.0 | 2812 | 0.4999 | 0.89 |
| 0.0441 | 26.0 | 2925 | 0.5197 | 0.88 |
| 0.0079 | 27.0 | 3037 | 0.5115 | 0.89 |
| 0.0072 | 28.0 | 3150 | 0.5136 | 0.88 |
| 0.007 | 29.0 | 3262 | 0.5394 | 0.88 |
| 0.0068 | 30.0 | 3375 | 0.5374 | 0.88 |
| 0.0061 | 31.0 | 3487 | 0.5221 | 0.88 |
| 0.0533 | 32.0 | 3600 | 0.5775 | 0.87 |
| 0.0055 | 33.0 | 3712 | 0.5632 | 0.88 |
| 0.0059 | 34.0 | 3825 | 0.5584 | 0.87 |
| 0.0051 | 35.0 | 3937 | 0.5444 | 0.88 |
| 0.0051 | 36.0 | 4050 | 0.5373 | 0.88 |
| 0.0045 | 37.0 | 4162 | 0.5723 | 0.87 |
| 0.0058 | 38.0 | 4275 | 0.5773 | 0.87 |
| 0.0043 | 39.0 | 4387 | 0.5455 | 0.88 |
| 0.0044 | 40.0 | 4500 | 0.5686 | 0.88 |
| 0.004 | 41.0 | 4612 | 0.5622 | 0.87 |
| 0.004 | 42.0 | 4725 | 0.5797 | 0.88 |
| 0.0042 | 43.0 | 4837 | 0.5621 | 0.88 |
| 0.0037 | 44.0 | 4950 | 0.5734 | 0.87 |
| 0.0048 | 45.0 | 5062 | 0.5774 | 0.88 |
| 0.0039 | 46.0 | 5175 | 0.5901 | 0.87 |
| 0.0043 | 47.0 | 5287 | 0.5743 | 0.88 |
| 0.0043 | 48.0 | 5400 | 0.5757 | 0.87 |
| 0.0037 | 49.0 | 5512 | 0.5710 | 0.88 |
| 0.0036 | 49.78 | 5600 | 0.5759 | 0.87 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mradermacher/Mistral-Nemo-abliterated-Nemo-Pro-v2-GGUF | mradermacher | 2024-11-25T14:49:49Z | 202 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:win10/Mistral-Nemo-abliterated-Nemo-Pro-v2",
"base_model:quantized:win10/Mistral-Nemo-abliterated-Nemo-Pro-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-25T13:18:56Z | ---
base_model: win10/Mistral-Nemo-abliterated-Nemo-Pro-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/win10/Mistral-Nemo-abliterated-Nemo-Pro-v2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mistral-Nemo-abliterated-Nemo-Pro-v2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
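For example, a single quant file from this repository can be fetched with `huggingface_hub` and then passed to llama.cpp (a hedged sketch; the file name is taken from the table below):
```python
from huggingface_hub import hf_hub_download

# Download one quant file; pass the returned path to llama.cpp, e.g. `llama-cli -m <path>`.
path = hf_hub_download(
    repo_id="mradermacher/Mistral-Nemo-abliterated-Nemo-Pro-v2-GGUF",
    filename="Mistral-Nemo-abliterated-Nemo-Pro-v2.Q4_K_M.gguf",
)
print(path)
```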
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-abliterated-Nemo-Pro-v2-GGUF/resolve/main/Mistral-Nemo-abliterated-Nemo-Pro-v2.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-abliterated-Nemo-Pro-v2-GGUF/resolve/main/Mistral-Nemo-abliterated-Nemo-Pro-v2.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-abliterated-Nemo-Pro-v2-GGUF/resolve/main/Mistral-Nemo-abliterated-Nemo-Pro-v2.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-abliterated-Nemo-Pro-v2-GGUF/resolve/main/Mistral-Nemo-abliterated-Nemo-Pro-v2.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-abliterated-Nemo-Pro-v2-GGUF/resolve/main/Mistral-Nemo-abliterated-Nemo-Pro-v2.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-abliterated-Nemo-Pro-v2-GGUF/resolve/main/Mistral-Nemo-abliterated-Nemo-Pro-v2.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-abliterated-Nemo-Pro-v2-GGUF/resolve/main/Mistral-Nemo-abliterated-Nemo-Pro-v2.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-abliterated-Nemo-Pro-v2-GGUF/resolve/main/Mistral-Nemo-abliterated-Nemo-Pro-v2.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-abliterated-Nemo-Pro-v2-GGUF/resolve/main/Mistral-Nemo-abliterated-Nemo-Pro-v2.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-abliterated-Nemo-Pro-v2-GGUF/resolve/main/Mistral-Nemo-abliterated-Nemo-Pro-v2.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-abliterated-Nemo-Pro-v2-GGUF/resolve/main/Mistral-Nemo-abliterated-Nemo-Pro-v2.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-abliterated-Nemo-Pro-v2-GGUF/resolve/main/Mistral-Nemo-abliterated-Nemo-Pro-v2.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
litmudoc/Qwen2.5-Coder-32B-Instruct-Q4-mlx | litmudoc | 2024-11-25T14:36:28Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2024-11-25T14:35:34Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-32B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- mlx
- mlx-my-repo
---
# litmudoc/Qwen2.5-Coder-32B-Instruct-Q4-mlx
The Model [litmudoc/Qwen2.5-Coder-32B-Instruct-Q4-mlx](https://huggingface.co/litmudoc/Qwen2.5-Coder-32B-Instruct-Q4-mlx) was converted to MLX format from [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) using mlx-lm version **0.19.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("litmudoc/Qwen2.5-Coder-32B-Instruct-Q4-mlx")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
beomi/Solar-Ko-Recovery-11B-Q8_0-GGUF | beomi | 2024-11-25T14:35:54Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"solar",
"mistral",
"pytorch",
"solar-ko",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"ko",
"en",
"base_model:beomi/Solar-Ko-Recovery-11B",
"base_model:quantized:beomi/Solar-Ko-Recovery-11B",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-11-25T14:35:07Z | ---
language:
- ko
- en
pipeline_tag: text-generation
inference: false
tags:
- solar
- mistral
- pytorch
- solar-ko
- llama-cpp
- gguf-my-repo
library_name: transformers
license: apache-2.0
base_model: beomi/Solar-Ko-Recovery-11B
---
# beomi/Solar-Ko-Recovery-11B-Q8_0-GGUF
This model was converted to GGUF format from [`beomi/Solar-Ko-Recovery-11B`](https://huggingface.co/beomi/Solar-Ko-Recovery-11B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/beomi/Solar-Ko-Recovery-11B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo beomi/Solar-Ko-Recovery-11B-Q8_0-GGUF --hf-file solar-ko-recovery-11b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo beomi/Solar-Ko-Recovery-11B-Q8_0-GGUF --hf-file solar-ko-recovery-11b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo beomi/Solar-Ko-Recovery-11B-Q8_0-GGUF --hf-file solar-ko-recovery-11b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo beomi/Solar-Ko-Recovery-11B-Q8_0-GGUF --hf-file solar-ko-recovery-11b-q8_0.gguf -c 2048
```
|
mradermacher/Teleut-7b-GGUF | mradermacher | 2024-11-25T14:33:08Z | 28 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:allenai/tulu-3-sft-mixture",
"base_model:allura-org/Teleut-7b",
"base_model:quantized:allura-org/Teleut-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-25T12:04:21Z | ---
base_model: allura-org/Teleut-7b
datasets:
- allenai/tulu-3-sft-mixture
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/allura-org/Teleut-7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Teleut-7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Teleut-7b-GGUF/resolve/main/Teleut-7b.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Teleut-7b-GGUF/resolve/main/Teleut-7b.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Teleut-7b-GGUF/resolve/main/Teleut-7b.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Teleut-7b-GGUF/resolve/main/Teleut-7b.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Teleut-7b-GGUF/resolve/main/Teleut-7b.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Teleut-7b-GGUF/resolve/main/Teleut-7b.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Teleut-7b-GGUF/resolve/main/Teleut-7b.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Teleut-7b-GGUF/resolve/main/Teleut-7b.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Teleut-7b-GGUF/resolve/main/Teleut-7b.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Teleut-7b-GGUF/resolve/main/Teleut-7b.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Teleut-7b-GGUF/resolve/main/Teleut-7b.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Teleut-7b-GGUF/resolve/main/Teleut-7b.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Teleut-7b-GGUF/resolve/main/Teleut-7b.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
bartowski/cybertron-v4-qw7B-UNAMGS-GGUF | bartowski | 2024-11-25T14:27:00Z | 557 | 3 | null | [
"gguf",
"generated_from_trainer",
"text-generation",
"en",
"dataset:Magpie-Align/Magpie-Qwen2.5-Pro-1M-v0.1",
"base_model:fblgit/cybertron-v4-qw7B-UNAMGS",
"base_model:quantized:fblgit/cybertron-v4-qw7B-UNAMGS",
"license:other",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-21T16:12:13Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
datasets:
- Magpie-Align/Magpie-Qwen2.5-Pro-1M-v0.1
base_model: fblgit/cybertron-v4-qw7B-UNAMGS
license_name: qwen
tags:
- generated_from_trainer
license: other
language:
- en
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
model-index:
- name: cybertron-v4-qw7B-UNAMGS
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 60.84
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=fblgit/cybertron-v4-qw7B-UNAMGS
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 37.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=fblgit/cybertron-v4-qw7B-UNAMGS
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 29.91
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=fblgit/cybertron-v4-qw7B-UNAMGS
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.85
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=fblgit/cybertron-v4-qw7B-UNAMGS
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 12.69
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=fblgit/cybertron-v4-qw7B-UNAMGS
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 38.89
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=fblgit/cybertron-v4-qw7B-UNAMGS
name: Open LLM Leaderboard
---
## Llamacpp imatrix Quantizations of cybertron-v4-qw7B-UNAMGS
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4132">b4132</a> for quantization.
Original model: https://huggingface.co/fblgit/cybertron-v4-qw7B-UNAMGS
# cybertron-v4-qw7B-UNAMGS
**UNA IS BACK** Cybertron v4 UNA-MGS, Based on the amazing Qwen2.5 7B
**SCORING #1 7-8B LLM WITH NO CONTAMINATION 21.11.2024 with avg. 31.82**

This special edition went thru UNA at MLP layers just like [miniclaus-1.5B](https://huggingface.co/fblgit/miniclaus-qw1.5B-UNAMGS)
Here we use our novel approach called `MGS`. It's up to you to figure out what it means. On top of that we used `UNA: Uniform Neural Alignment`
Cybertron V4 went thru SFT with `MGS & UNA` over `Magpie-Align/Magpie-Qwen2.5-Pro-1M-v0.1` dataset.
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [cybertron-v4-qw7B-UNAMGS-f16.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-f16.gguf) | f16 | 15.24GB | false | Full F16 weights. |
| [cybertron-v4-qw7B-UNAMGS-Q8_0.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q8_0.gguf) | Q8_0 | 8.10GB | false | Extremely high quality, generally unneeded but max available quant. |
| [cybertron-v4-qw7B-UNAMGS-Q6_K_L.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q6_K_L.gguf) | Q6_K_L | 6.52GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [cybertron-v4-qw7B-UNAMGS-Q6_K.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q6_K.gguf) | Q6_K | 6.25GB | false | Very high quality, near perfect, *recommended*. |
| [cybertron-v4-qw7B-UNAMGS-Q5_K_L.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q5_K_L.gguf) | Q5_K_L | 5.78GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [cybertron-v4-qw7B-UNAMGS-Q5_K_M.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q5_K_M.gguf) | Q5_K_M | 5.44GB | false | High quality, *recommended*. |
| [cybertron-v4-qw7B-UNAMGS-Q5_K_S.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q5_K_S.gguf) | Q5_K_S | 5.32GB | false | High quality, *recommended*. |
| [cybertron-v4-qw7B-UNAMGS-Q4_K_L.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q4_K_L.gguf) | Q4_K_L | 5.09GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [cybertron-v4-qw7B-UNAMGS-Q4_K_M.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q4_K_M.gguf) | Q4_K_M | 4.68GB | false | Good quality, default size for most use cases, *recommended*. |
| [cybertron-v4-qw7B-UNAMGS-Q3_K_XL.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q3_K_XL.gguf) | Q3_K_XL | 4.57GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [cybertron-v4-qw7B-UNAMGS-Q4_K_S.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q4_K_S.gguf) | Q4_K_S | 4.46GB | false | Slightly lower quality with more space savings, *recommended*. |
| [cybertron-v4-qw7B-UNAMGS-Q4_0.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q4_0.gguf) | Q4_0 | 4.44GB | false | Legacy format, generally not worth using over similarly sized formats |
| [cybertron-v4-qw7B-UNAMGS-Q4_0_8_8.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q4_0_8_8.gguf) | Q4_0_8_8 | 4.43GB | false | Optimized for ARM and AVX inference. Requires 'sve' support for ARM (see details below). *Don't use on Mac*. |
| [cybertron-v4-qw7B-UNAMGS-Q4_0_4_8.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q4_0_4_8.gguf) | Q4_0_4_8 | 4.43GB | false | Optimized for ARM inference. Requires 'i8mm' support (see details below). *Don't use on Mac*. |
| [cybertron-v4-qw7B-UNAMGS-Q4_0_4_4.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q4_0_4_4.gguf) | Q4_0_4_4 | 4.43GB | false | Optimized for ARM inference. Should work well on all ARM chips, not for use with GPUs. *Don't use on Mac*. |
| [cybertron-v4-qw7B-UNAMGS-IQ4_XS.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-IQ4_XS.gguf) | IQ4_XS | 4.22GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [cybertron-v4-qw7B-UNAMGS-Q3_K_L.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q3_K_L.gguf) | Q3_K_L | 4.09GB | false | Lower quality but usable, good for low RAM availability. |
| [cybertron-v4-qw7B-UNAMGS-Q3_K_M.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q3_K_M.gguf) | Q3_K_M | 3.81GB | false | Low quality. |
| [cybertron-v4-qw7B-UNAMGS-IQ3_M.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-IQ3_M.gguf) | IQ3_M | 3.57GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [cybertron-v4-qw7B-UNAMGS-Q2_K_L.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q2_K_L.gguf) | Q2_K_L | 3.55GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [cybertron-v4-qw7B-UNAMGS-Q3_K_S.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q3_K_S.gguf) | Q3_K_S | 3.49GB | false | Low quality, not recommended. |
| [cybertron-v4-qw7B-UNAMGS-IQ3_XS.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-IQ3_XS.gguf) | IQ3_XS | 3.35GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [cybertron-v4-qw7B-UNAMGS-Q2_K.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-Q2_K.gguf) | Q2_K | 3.02GB | false | Very low quality but surprisingly usable. |
| [cybertron-v4-qw7B-UNAMGS-IQ2_M.gguf](https://huggingface.co/bartowski/cybertron-v4-qw7B-UNAMGS-GGUF/blob/main/cybertron-v4-qw7B-UNAMGS-IQ2_M.gguf) | IQ2_M | 2.78GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/cybertron-v4-qw7B-UNAMGS-GGUF --include "cybertron-v4-qw7B-UNAMGS-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/cybertron-v4-qw7B-UNAMGS-GGUF --include "cybertron-v4-qw7B-UNAMGS-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (cybertron-v4-qw7B-UNAMGS-Q8_0) or download them all in place (./)
</details>
## Q4_0_X_X information
<details>
<summary>Click to view Q4_0_X_X information</summary>
These are *NOT* for Metal (Apple) or GPU (nvidia/AMD/intel) offloading, only ARM chips (and certain AVX2/AVX512 CPUs).
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
If you're using a CPU that supports AVX2 or AVX512 (typically server CPUs and AMD's latest Zen5 CPUs) and are not offloading to a GPU, the Q4_0_8_8 may offer a nice speed as well:
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
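As a worked example of this rule: with 8GB of VRAM, a quant 1-2GB smaller — such as Q6_K at 6.25GB from the table above — would fit entirely on the GPU.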
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Maites/Homer-v0.5-Qwen2.5-7B-Q4_K_M-GGUF | Maites | 2024-11-25T14:23:36Z | 5 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:newsbang/Homer-v0.5-Qwen2.5-7B",
"base_model:quantized:newsbang/Homer-v0.5-Qwen2.5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-25T14:23:14Z | ---
license: apache-2.0
base_model: newsbang/Homer-v0.5-Qwen2.5-7B
tags:
- llama-cpp
- gguf-my-repo
---
# JulienElkaim/Homer-v0.5-Qwen2.5-7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`newsbang/Homer-v0.5-Qwen2.5-7B`](https://huggingface.co/newsbang/Homer-v0.5-Qwen2.5-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/newsbang/Homer-v0.5-Qwen2.5-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo JulienElkaim/Homer-v0.5-Qwen2.5-7B-Q4_K_M-GGUF --hf-file homer-v0.5-qwen2.5-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo JulienElkaim/Homer-v0.5-Qwen2.5-7B-Q4_K_M-GGUF --hf-file homer-v0.5-qwen2.5-7b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo JulienElkaim/Homer-v0.5-Qwen2.5-7B-Q4_K_M-GGUF --hf-file homer-v0.5-qwen2.5-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo JulienElkaim/Homer-v0.5-Qwen2.5-7B-Q4_K_M-GGUF --hf-file homer-v0.5-qwen2.5-7b-q4_k_m.gguf -c 2048
```
|
saliq5/Pipeline | saliq5 | 2024-11-25T14:17:15Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T14:10:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
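A generic, hedged sketch for loading a BERT-based text-classification checkpoint such as this one (the exact task and label names are assumptions):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("saliq5/Pipeline")
model = AutoModelForSequenceClassification.from_pretrained("saliq5/Pipeline")

inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax(dim=-1).item()
# Label names depend on how the model was trained; id2label may only contain generic labels.
print(model.config.id2label.get(predicted_class_id, predicted_class_id))
```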
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Meta-Llama-3.1-8B-Instruct-abliterated-Q8_0-GGUF | Triangle104 | 2024-11-25T14:13:00Z | 16 | 0 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"base_model:huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-04T22:53:28Z | ---
library_name: transformers
license: llama3.1
base_model: huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated
tags:
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Meta-Llama-3.1-8B-Instruct-abliterated-Q8_0-GGUF
This model was converted to GGUF format from [`huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated) for more details on the model.
---
Model details:
-
This is an uncensored version of Llama 3.1 8B Instruct created with abliteration (see this article to know more about it).
Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-abliterated-Q8_0-GGUF --hf-file meta-llama-3.1-8b-instruct-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-abliterated-Q8_0-GGUF --hf-file meta-llama-3.1-8b-instruct-abliterated-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-abliterated-Q8_0-GGUF --hf-file meta-llama-3.1-8b-instruct-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-abliterated-Q8_0-GGUF --hf-file meta-llama-3.1-8b-instruct-abliterated-q8_0.gguf -c 2048
```
|
briannlongzhao/basquiat_textual_inversion | briannlongzhao | 2024-11-25T14:09:59Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-11-09T08:05:41Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - briannlongzhao/basquiat_textual_inversion
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find some example images in the following.
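A minimal loading sketch with diffusers (hedged: the `<basquiat>` placeholder token below is an assumption — use the token actually stored with the learned embedding):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
# Load the learned embedding from this repository.
pipe.load_textual_inversion("briannlongzhao/basquiat_textual_inversion")
# "<basquiat>" is a hypothetical placeholder; replace it with the embedding's actual token.
image = pipe("a painting of a city street in the style of <basquiat>").images[0]
image.save("basquiat_style.png")
```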
|
Triangle104/Meta-Llama-3.1-8B-Instruct-abliterated-Q5_K_M-GGUF | Triangle104 | 2024-11-25T14:04:04Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"base_model:huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-04T22:47:20Z | ---
library_name: transformers
license: llama3.1
base_model: huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated
tags:
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Meta-Llama-3.1-8B-Instruct-abliterated-Q5_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated) for more details on the model.
---
Model details:
-
This is an uncensored version of Llama 3.1 8B Instruct created with abliteration (see this article to know more about it).
Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-abliterated-Q5_K_M-GGUF --hf-file meta-llama-3.1-8b-instruct-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-abliterated-Q5_K_M-GGUF --hf-file meta-llama-3.1-8b-instruct-abliterated-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-abliterated-Q5_K_M-GGUF --hf-file meta-llama-3.1-8b-instruct-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-abliterated-Q5_K_M-GGUF --hf-file meta-llama-3.1-8b-instruct-abliterated-q5_k_m.gguf -c 2048
```
|
sna89/segformer-b0-scene-parse-150 | sna89 | 2024-11-25T14:03:55Z | 37 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-10-29T15:42:07Z | ---
library_name: transformers
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2863
- Mean Iou: 0.1322
- Mean Accuracy: 0.2193
- Overall Accuracy: 0.5236
- Per Category Iou: [0.17584122398758895, 0.46366573409354317, 0.6899099859557716, 0.8850136717893229, 0.2200496783644354, 0.21159926360234527, 0.28788921058002676, 0.8057838286364055, 0.3129988790098432, 0.01191510487774606, 0.13235068324888685, 0.0, 0.2795662866085401, 0.0396786413020855, 0.0, 0.0, 0.0023832843617368595, 0.021505376344086023, 0.030850694907567956, nan, 0.0, 0.8503331373701744, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0]
- Per Category Accuracy: [0.21451742313435987, 0.8538336753640183, 0.984218271094859, 0.9792485738046007, 0.3231085757037314, 0.6344898969443319, 0.36251832390960836, 0.9197207801318928, 0.9971040015654046, 0.0165147084121796, 0.13401741293532338, nan, 0.5442280491785443, 0.03995674007998055, nan, nan, 0.002386205769387922, 0.10079726651480637, 0.03569003826688063, nan, 0.0, 0.9717276900683014, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0]
## Model description
More information needed
## Intended uses & limitations
More information needed
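A hedged inference sketch for the intended use (semantic segmentation of scene images; the sample image URL is only an example):
```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("sna89/segformer-b0-scene-parse-150")
model = SegformerForSemanticSegmentation.from_pretrained("sna89/segformer-b0-scene-parse-150")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)
# Upsample to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]
```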
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 4.2263 | 1.0 | 20 | 3.5893 | 0.0845 | 0.1572 | 0.4326 | [0.09895809215096087, 0.3766523287544517, 0.4026077186261011, 0.7401129262459497, 0.07194570618215938, 0.16042202572347267, 0.2403559937369879, 0.7332509445672412, 0.16759665790517025, 0.0, 0.0, nan, 0.23127111732494465, 0.010414577064526915, nan, 0.0, 0.0030889924000980632, 0.0, 0.0, nan, 0.0, 0.22757075729428478, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.1476695574059358, 0.47749047200234535, 0.9076271868244234, 0.9933600613514799, 0.42068019576670096, 0.598727930036152, 0.531888362175643, 0.944892661708994, 0.22333431171118287, 0.0, 0.0, nan, 0.323958219997824, 0.010446096965937004, nan, nan, 0.0030889924000980632, 0.0, 0.0, nan, 0.0, 0.2331765759713358, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 3.8869 | 2.0 | 40 | 3.4918 | 0.0916 | 0.1573 | 0.4367 | [0.09252577101279869, 0.31475782946456565, 0.4295326761989256, 0.8197392257882057, 0.05785553150563142, 0.1740403836219482, 0.2149033864646486, 0.7299493926028674, 0.33963437276016617, 0.0, 0.0, nan, 0.09344845099035043, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.21577064937968035, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.12580474265510372, 0.6357314570507183, 0.8192354905935082, 0.9941916716013189, 0.43411577667633033, 0.5811919655581057, 0.5840710491747102, 0.9451241756699874, 0.38479600821837395, 0.0, 0.0, nan, 0.1000979218801001, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.21615720524017468, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 3.0216 | 3.0 | 60 | 3.2395 | 0.0969 | 0.1700 | 0.4619 | [0.1664985131690739, 0.20999590917060057, 0.4289943594009393, 0.815095824349939, 0.09465792603500028, 0.15100404742383425, 0.23608409199104305, 0.7688087904309501, 0.44141398105519397, 0.0, 0.0, nan, 0.09315274642588413, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.2777312098351495, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.2287043115395394, 0.2528290823805336, 0.9409164873779907, 0.9921207355705914, 0.41681473861404655, 0.5221637190045453, 0.7647305195423306, 0.9753472709414901, 0.8161040994031895, 0.0, 0.0, nan, 0.10102273963660102, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.27824431754562756, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 3.8877 | 4.0 | 80 | 3.2859 | 0.1132 | 0.1880 | 0.5045 | [0.13958900136723335, 0.44969956518889154, 0.4770084513190282, 0.8353439211241448, 0.1095341605825186, 0.17624136214635808, 0.33751825907089317, 0.7435181556072182, 0.4413269579617956, 0.0, 0.022456249032058232, nan, 0.1530906233630173, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.41781281449178126, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.19401160134670828, 0.8045108961203948, 0.9188750100566024, 0.9897035649222267, 0.3502914679385268, 0.5895624259334263, 0.6026519694633237, 0.9715237827978112, 0.9021230799334703, 0.0, 0.02254353233830846, nan, 0.19078446306169078, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.418374202216997, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 3.2562 | 5.0 | 100 | 3.1562 | 0.1131 | 0.1913 | 0.5028 | [0.13037894205033448, 0.4834356789963185, 0.4751899573128162, 0.7748325322635583, 0.1412642564547108, 0.17349191238521242, 0.3597649327494358, 0.7669447541424782, 0.38942214990147345, 0.0, 0.0, nan, 0.2171898871420827, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.38578765211566374, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.17634469282536172, 0.7714883220951823, 0.9415289669079386, 0.9937289468319532, 0.5181582967050095, 0.6260444324437844, 0.6357682715555809, 0.9796758804546092, 0.8101359945210841, 0.0, 0.0, nan, 0.23974540311173975, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.3869107602732057, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.3162 | 6.0 | 120 | 3.0800 | 0.1224 | 0.1966 | 0.5413 | [0.2575755334243388, 0.43287422594667885, 0.5950764259725814, 0.8216711856315689, 0.15240326899990767, 0.1724841670068072, 0.39731244651247144, 0.7701781913142707, 0.37261120352567567, 0.0, 0.0, nan, 0.13170231467916788, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.5472581978506476, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.3676171939482585, 0.7812176292387374, 0.9329776107588778, 0.9913926721222888, 0.41157766763303094, 0.5895324242833356, 0.6673044339101796, 0.9848814367896731, 0.8437432736522845, 0.0, 0.0, nan, 0.1467196170166467, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.5559287873698354, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.7766 | 7.0 | 140 | 2.8844 | 0.1088 | 0.1827 | 0.5185 | [0.19119784006848564, 0.3880060032373116, 0.5818301479410719, 0.8527143738433066, 0.10056249675981128, 0.17833744039158628, 0.16069364920943427, 0.7868340841345615, 0.41851637958452603, 0.0, 0.0, nan, 0.046007078012001844, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.5400733081678428, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.2507917861543954, 0.8751607544219682, 0.9728043517189653, 0.98400201916263, 0.24187162941488202, 0.6361699893494143, 0.23295638433567498, 0.9852041532201488, 0.9784463359749536, 0.0, 0.0, nan, 0.048797736916548796, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.5526816705856007, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 3.0093 | 8.0 | 160 | 2.8210 | 0.1150 | 0.1917 | 0.5039 | [0.12344522325508885, 0.4190219657016113, 0.48700153761982673, 0.8265844404328228, 0.14283563060304585, 0.15878425995925324, 0.3035741578978308, 0.7848042608939411, 0.40621521952191897, 0.0, 0.0, nan, 0.06725462891330136, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.6510024057738573, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.1550307692898302, 0.8093540506205413, 0.9748675772541712, 0.9897326874601587, 0.44353003522553697, 0.5576706718869537, 0.4915662421230986, 0.9624175669987372, 0.9570002935133548, 0.0, 0.0, nan, 0.07093896202807094, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.6817265703728586, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.7142 | 9.0 | 180 | 2.9969 | 0.1213 | 0.2014 | 0.5448 | [0.32157253948768755, 0.4239378720950628, 0.6498940039127582, 0.7934785979347343, 0.1717027417027417, 0.18551384817730157, 0.26104654172706565, 0.686709963413926, 0.3820284875703908, 0.0, 0.0, nan, 0.13747102511944748, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.7161244190203789, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.453548545608802, 0.8219564155184208, 0.8250514508757679, 0.9935768624671967, 0.3709280214470526, 0.6939981698993445, 0.38398347516515313, 0.9955030166970675, 0.9703845024948635, 0.0, 0.0, nan, 0.15808943531715808, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.7849624902026648, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.3842 | 10.0 | 200 | 2.7999 | 0.1158 | 0.1957 | 0.5058 | [0.15792858017217096, 0.43614669480408375, 0.47939096432843764, 0.8246707032366246, 0.1638807939737175, 0.17373463923756985, 0.2752884267882461, 0.7090395197598799, 0.39797006957697034, 0.0, 0.0, nan, 0.08088080723144839, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.7009632224168126, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.20554384671787843, 0.7472842763607935, 0.9443474108465972, 0.9898588851245312, 0.5574675021041803, 0.6122736750521278, 0.4713004740419213, 0.9943664936158272, 0.9160943156246942, 0.0, 0.0, nan, 0.08372320748558372, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.7170529615944463, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.881 | 11.0 | 220 | 2.8349 | 0.1127 | 0.1890 | 0.5121 | [0.20084870260390272, 0.388384263507593, 0.6356099733841843, 0.7847308108773856, 0.1370602426367218, 0.17902592065001743, 0.1320768749264036, 0.7402269663625819, 0.42589047880386044, 0.0, 0.0, nan, 0.12253919300950912, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.7605416259074294, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.2708254154717894, 0.8801524479624744, 0.8961873149260743, 0.9896161973084303, 0.2461735091492877, 0.6240193210626584, 0.2028575780075009, 0.9939315279921426, 0.9338714411505723, 0.0, 0.0, nan, 0.12969209008812968, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.8270070540812899, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.5407 | 12.0 | 240 | 2.7539 | 0.1218 | 0.2070 | 0.5266 | [0.18357910211301118, 0.46364088562680317, 0.5094902434019468, 0.8681265640218238, 0.18243480481343163, 0.15667028743680012, 0.3991855389577638, 0.7923434547204156, 0.35055063871226444, 0.0, 0.008284939992257066, nan, 0.2738033350661815, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0031940501172694727, nan, 0.0, 0.8026869968545076, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.24065884285972045, 0.8197830548226327, 0.9729964003851355, 0.9777439093447753, 0.35633903799993766, 0.5387396306796873, 0.610238543986902, 0.9589167952855339, 0.9579591038058899, 0.0, 0.008317786069651742, nan, 0.3724839516918725, 0.0, nan, nan, 0.0, 0.0, 0.0031940501172694727, nan, 0.0, 0.8429067293696114, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.32 | 13.0 | 260 | 2.7703 | 0.1213 | 0.2069 | 0.5297 | [0.19536897974657705, 0.42577625264523516, 0.5567181658483852, 0.864410192990617, 0.19432419432419432, 0.17911734631982115, 0.34068981529253944, 0.758568283963504, 0.36634701662007924, 0.0, 0.0, nan, 0.2882105631952352, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0026999922857363265, nan, 0.0, 0.8014407192763906, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.2571836598371512, 0.8313026482947328, 0.9622961753767657, 0.9861312002692216, 0.3526294460550516, 0.620149108200951, 0.5028461553106022, 0.9839834432440017, 0.9706780158497211, 0.0, 0.0, nan, 0.35801327385485804, 0.0, nan, nan, 0.0, 0.0, 0.0027002839155659793, nan, 0.0, 0.8284066733848393, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.1135 | 14.0 | 280 | 2.6622 | 0.1124 | 0.1949 | 0.5054 | [0.16571107709584867, 0.42375923241526886, 0.4717263585684867, 0.8764458510487221, 0.16591042154566746, 0.16476864320911744, 0.22707632674001027, 0.8075094828827963, 0.34249464336248603, 5.709717939933767e-05, 0.0, nan, 0.15643123247685986, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0021293667448463154, nan, 0.0, 0.8029604728081114, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.20676271752525077, 0.7439030587315548, 0.9862659251165917, 0.9785431612191342, 0.35334642601078586, 0.5708113946267045, 0.3791954613817656, 0.9439034656938403, 0.9946384893846003, 8.60141063134354e-05, 0.0, nan, 0.22157545424872158, 0.0, nan, nan, 0.0, 0.0, 0.0021293667448463154, nan, 0.0, 0.8290784906505431, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.0239 | 15.0 | 300 | 2.7246 | 0.1166 | 0.2057 | 0.5092 | [0.1593715892777081, 0.4051560848597029, 0.5242804535828665, 0.8374928088102348, 0.17295706164865154, 0.1765569635080468, 0.2223814156815183, 0.7059651856068551, 0.42098776227416, 0.0, 0.06531416478143472, nan, 0.29960660142007295, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01298760412083327, nan, 0.0, 0.7786718272560603, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.2026454295223946, 0.843635297566696, 0.913007144729432, 0.9892278968026689, 0.4524143520683313, 0.6330948202151119, 0.31765568183981574, 0.9918479023432019, 0.9891791409842481, 0.0, 0.06957400497512438, nan, 0.33973452290283973, 0.0, nan, nan, 0.0, 0.0, 0.013208245895568448, nan, 0.0, 0.8560071660508342, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.2951 | 16.0 | 320 | 2.6048 | 0.1215 | 0.2092 | 0.5120 | [0.14996180833660616, 0.42678840628073966, 0.5093740819302307, 0.8636249650919577, 0.18321545157780195, 0.17583989182817322, 0.19998336302571107, 0.8076484672307628, 0.3420567216232289, 0.0009434576094736158, 0.11747829369573425, nan, 0.3543602800763845, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01871461361897475, nan, 0.0, 0.83295952152088, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.18616284113986495, 0.8598612332649272, 0.9809508485177216, 0.9806691064881778, 0.29393060880950156, 0.6086434753911465, 0.2746111523597388, 0.9395047004349656, 0.9960473534879171, 0.0024944090830896267, 0.12095771144278607, nan, 0.6057012294636057, 0.0, nan, nan, 0.0, 0.0, 0.018871127021355388, nan, 0.0, 0.8732504758705633, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.9269 | 17.0 | 340 | 2.6279 | 0.1313 | 0.2172 | 0.5423 | [0.2547838898120707, 0.49042498145038466, 0.6656349431661068, 0.8067647027718731, 0.22976582759763683, 0.2183854729462615, 0.3461867436041465, 0.7686043707941518, 0.3324771791951281, 0.0, 0.10885158618649708, nan, 0.3645145779603969, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00021287919105907396, nan, 0.0, 0.7954523695548109, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.3425698787271534, 0.785431447278413, 0.9370910855680618, 0.9873867052378502, 0.3673431216683812, 0.621499182455035, 0.5162583052525368, 0.9839904588185773, 0.9967126504255943, 0.0, 0.11442786069651742, nan, 0.45365031008595363, 0.0, nan, nan, 0.0, 0.0, 0.00021602271324527836, nan, 0.0, 0.9302989586832382, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.8633 | 18.0 | 360 | 2.5553 | 0.1241 | 0.2080 | 0.5239 | [0.19145117291674843, 0.4167042508716267, 0.5898506559062773, 0.8725142076234337, 0.15604676215935842, 0.18275347310945841, 0.2661777687241122, 0.8112029149758101, 0.3791895442449557, 0.00010535465009086838, 0.06670273612614006, nan, 0.2987848686471121, 0.0, 0.0, 0.0, 0.0, 0.0, 0.003620863507195893, nan, 0.0, 0.8540977581771407, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.25285714834137596, 0.8306654939900322, 0.9810494681030523, 0.9771970527991612, 0.24383553103276287, 0.6115386346249043, 0.37314143202543454, 0.9340044899677283, 0.9912239506897564, 0.0003440564252537416, 0.06708644278606965, nan, 0.5203459906430203, 0.0, nan, nan, 0.0, 0.0, 0.003656955931366498, nan, 0.0, 0.9107602732056881, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.0786 | 19.0 | 380 | 2.5326 | 0.1302 | 0.2160 | 0.5335 | [0.22022741800589665, 0.45768263377289087, 0.6088333495788019, 0.8618503606538719, 0.22605822987003557, 0.19417693169092945, 0.19189359013280052, 0.778963732320985, 0.3992842697339316, 0.0012784454103809768, 0.09816471314003701, nan, 0.44293109144975956, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00085976602081862, nan, 0.0, 0.8549036221139589, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.3002318733756387, 0.8613700772012117, 0.9564153337883675, 0.9801125424299199, 0.38822905950933634, 0.6242743350884299, 0.258724084756411, 0.970239932650484, 0.9890030329713335, 0.00430070531567177, 0.09895833333333333, nan, 0.6563486018931564, 0.0, nan, nan, 0.0, 0.0, 0.0008640908529811135, nan, 0.0, 0.9038181614600829, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.0501 | 20.0 | 400 | 2.5106 | 0.1301 | 0.2148 | 0.5166 | [0.16925647551849732, 0.42199517276740495, 0.49332017847566784, 0.8757217582290553, 0.232161713682305, 0.1855716918802924, 0.21257345167555872, 0.814565822988684, 0.3612282296684968, 0.0028069445888345983, 0.11552068912475004, nan, 0.4465860597439545, 0.0, 0.0, 0.0, 0.0, 0.0, 0.026407133666477154, nan, 0.0, 0.8451555740866211, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.21737937017970185, 0.7914160070360599, 0.9836265535828755, 0.9805429088238055, 0.33582717665762646, 0.6252643895414248, 0.3292021246216232, 0.9579977550161358, 0.9874278446335975, 0.004644761740925512, 0.11675995024875623, nan, 0.6831683168316832, 0.0, nan, nan, 0.0, 0.0, 0.027234292062708308, nan, 0.0, 0.9078490650543052, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.645 | 21.0 | 420 | 2.5014 | 0.1282 | 0.2153 | 0.5255 | [0.21214499175445686, 0.4258537141054513, 0.6590263642023237, 0.8645674339958805, 0.1698871682426568, 0.199620804184729, 0.263562946532518, 0.7853834183765788, 0.33610787249264684, 0.0013363158353426488, 0.13049076037564375, nan, 0.3396636666391435, 0.0, 0.0, 0.0, 0.0, 0.0, 0.008831624474488098, nan, 0.0, 0.8601172115977791, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.2760636807211108, 0.7598631877259846, 0.9780337849937325, 0.9860923702186456, 0.23186508307615575, 0.6617613968768282, 0.40944657033525617, 0.9135330433562509, 0.9928089228059876, 0.0039566488904180285, 0.13393967661691542, nan, 0.6713632901751714, 0.0, nan, nan, 0.0, 0.0, 0.009011233181088754, nan, 0.0, 0.9366812227074236, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.0367 | 22.0 | 440 | 2.4774 | 0.1199 | 0.2061 | 0.5207 | [0.17487617364542382, 0.478632902601751, 0.4920520352064927, 0.8868312757201646, 0.19538618971347216, 0.1723633495732258, 0.32337074094104923, 0.8096707945448062, 0.35712472410072704, 0.003322141721152147, 0.055735930735930736, 0.0, 0.23693584858904068, 0.0, 0.0, 0.0, 0.0, 0.0, 0.005500487022288432, nan, 0.0, 0.8423613363606877, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.2233719916924837, 0.8297781686699892, 0.9925386497940667, 0.9678843123359835, 0.2737928239658343, 0.6097835380945952, 0.45531821729777067, 0.9238108601094429, 0.9909695724488797, 0.008085325993462927, 0.05604788557213931, nan, 0.41611358938091614, 0.0, nan, nan, 0.0, 0.0, 0.005925194420441921, nan, 0.0, 0.8723547195162916, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.5569 | 23.0 | 460 | 2.5136 | 0.1294 | 0.2260 | 0.5306 | [0.24358070445878413, 0.4387693279729268, 0.6458843598239097, 0.8440154376017454, 0.25469779369500817, 0.19653072278810457, 0.36751591148977053, 0.7673311939511166, 0.3255732612839256, 0.0026392450763140203, 0.1534201251528447, 0.0, 0.47508613344200346, 0.0, 0.0, 0.0, 0.0, 0.0, 0.028900636719434547, nan, 0.0, 0.6895287958115183, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.3309071086081071, 0.7926473175021987, 0.8342594058429509, 0.9864612556991189, 0.35914461173976747, 0.6694568201251069, 0.5535058160565043, 0.9796688648800337, 0.9926719499070541, 0.009117495269224153, 0.1658115671641791, nan, 0.6976389946686976, 0.0, nan, nan, 0.0, 0.0, 0.03053635353660042, nan, 0.0, 0.9585152838427947, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.0129 | 24.0 | 480 | 2.4220 | 0.1231 | 0.2080 | 0.5199 | [0.1775199851273471, 0.46836633835055835, 0.4896399345125027, 0.8791340599962735, 0.18593814432989691, 0.17183267040563927, 0.28171753869039107, 0.805754360501489, 0.3438605606697135, 0.003286127846021438, 0.09345072598084646, 0.0, 0.2416756756756757, 0.0, 0.0, 0.0, 0.0, 0.0, 0.033464884611434695, nan, 0.0, 0.8720025408924885, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.22910932215427257, 0.8446320727059513, 0.9880670301749979, 0.977132336048201, 0.28111848873094547, 0.5689812939711684, 0.3749690635292326, 0.9035709274589588, 0.9841894139516681, 0.007225184930328574, 0.09406094527363185, nan, 0.48645413991948644, 0.0, nan, nan, 0.0, 0.0, 0.035196272065177135, nan, 0.0, 0.9222371514947935, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.828 | 25.0 | 500 | 2.4293 | 0.1307 | 0.2273 | 0.5351 | [0.23174784643198978, 0.4409356701875996, 0.6697353737488182, 0.8815332539517684, 0.22883603830418559, 0.2056038223485318, 0.31408247681123086, 0.7892248403173433, 0.30497623144536234, 0.00853343016658345, 0.13191457808097287, 0.0, 0.4582582582582583, 0.0, 0.0, 0.0, 0.0, 0.0, 0.09687373122208688, nan, 0.0, 0.7262600361511623, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.30116090246346267, 0.8662953190657676, 0.9173879305199069, 0.9750096266167053, 0.3173415630163035, 0.6468055743065869, 0.38163230338682963, 0.9310018240493897, 0.9936209764210938, 0.03234130397385171, 0.13829291044776118, nan, 0.8301599390708302, 0.0, nan, nan, 0.0, 0.0, 0.11044932724355018, nan, 0.0, 0.9672489082969432, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.2328 | 26.0 | 520 | 2.4304 | 0.1288 | 0.2155 | 0.5281 | [0.18864537242560092, 0.44074016534894167, 0.6373951553378514, 0.8839875027843234, 0.2365548088949187, 0.20737611493940375, 0.3448321825597825, 0.8122199592668025, 0.3646749222644132, 0.0024461347719733273, 0.11858529819694869, 0.0, 0.2802300461259999, 0.0006623127289799546, 0.0, 0.0, 0.0, 0.0024468285337850555, 0.009989053092501369, nan, 0.0, 0.8796660640221206, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.2482907016472991, 0.8206234730772989, 0.9861024242251225, 0.9759706703684649, 0.3478599706973409, 0.594992724599853, 0.4925086146172445, 0.9232776764417006, 0.9856569807259563, 0.006279029760880785, 0.11963619402985075, nan, 0.5221956261560222, 0.0006623127289799546, nan, nan, 0.0, 0.014806378132118452, 0.010137637328724849, nan, 0.0, 0.9261560855447318, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.4781 | 27.0 | 540 | 2.4422 | 0.1299 | 0.2209 | 0.5219 | [0.17659158243016285, 0.44569221165907835, 0.7025365089218796, 0.8805066143250369, 0.23730241121971285, 0.2012749564041425, 0.19055360454563391, 0.8090528451965804, 0.303646335601408, 0.0067779632721202, 0.2036432251977647, 0.0, 0.4443127962085308, 0.008524377894719368, 0.0, 0.0, 0.0, 0.0, 0.04045107411680677, nan, 0.0, 0.8049152946790742, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.21555010422785015, 0.8735776409655037, 0.9633446572839647, 0.98126773643456, 0.3776613984226441, 0.6787123291781048, 0.22984369943076893, 0.9733267854637295, 0.9950396243029058, 0.017460863581627385, 0.2181281094527363, nan, 0.6528125340006529, 0.008610065476739408, nan, nan, 0.0, 0.0, 0.045441920750524625, nan, 0.0, 0.9442951517187325, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.2081 | 28.0 | 560 | 2.3810 | 0.1261 | 0.2128 | 0.5211 | [0.15442323192338692, 0.4524473693434028, 0.5972118676123673, 0.8827326503943622, 0.1936700111015689, 0.1902729681100191, 0.41074565302421506, 0.7918516908362575, 0.3037401709809488, 0.004188319243000078, 0.08314034275127374, 0.0, 0.2056720749241525, 0.002098820145328662, 0.0, 0.0, 0.0, 0.0, 0.023980428923004806, nan, 0.0, 0.8754738244032374, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.18645268285941333, 0.857816866998925, 0.9874960746809787, 0.9792582813172448, 0.28822594220518094, 0.6313547245098481, 0.5497839206503322, 0.8944857583836117, 0.9965952450836513, 0.009289523481851024, 0.08372201492537314, nan, 0.4241105429224241, 0.002104310062961628, nan, nan, 0.0, 0.0, 0.025259227255894336, nan, 0.0, 0.9568357406785354, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.2767 | 29.0 | 580 | 2.3840 | 0.1317 | 0.2256 | 0.5317 | [0.19944340090647017, 0.4583429792538509, 0.6103020536097107, 0.8594584528918344, 0.22537235954158846, 0.2035824803618186, 0.34345250687539663, 0.8086612964187848, 0.3102282874025946, 0.0005179901980316373, 0.1641824922448362, 0.0, 0.4488895116653822, 0.003282642192001072, 0.0, 0.0, 0.0, 0.0, 0.0435928833825542, nan, 0.0, 0.8541082164328657, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.24925812036685133, 0.8048705169549497, 0.964164756993556, 0.9856911263626921, 0.35188129305776367, 0.6671316922430733, 0.49452662440268813, 0.9606356110565455, 0.995969083259955, 0.0011181833820746603, 0.16868781094527363, nan, 0.6970949842236971, 0.003286412528609395, nan, nan, 0.0, 0.0, 0.046691766448586594, nan, 0.0, 0.9544283954764304, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.0403 | 30.0 | 600 | 2.3747 | 0.1257 | 0.2133 | 0.5167 | [0.16352896522275467, 0.44764101913051757, 0.6004680867192238, 0.8813352478759936, 0.20915147650563998, 0.20128041942489958, 0.21751251881662623, 0.7871564469071027, 0.3088048194673444, 0.0002105263157894737, 0.13503761707354522, 0.0, 0.30282679993752926, 0.0051150254916849545, 0.0, 0.0, 0.0, 0.0, 0.015606902941857771, nan, 0.0, 0.8779039752194114, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.19847631551670525, 0.8820990911756084, 0.9867823803134546, 0.9791482628406124, 0.2965179712584557, 0.6381050957802679, 0.2695851657242942, 0.9009611337168514, 0.9950004891889248, 0.000430070531567177, 0.13673818407960198, nan, 0.6329017517136329, 0.005139211428667243, nan, nan, 0.0, 0.0, 0.016633748919886435, nan, 0.0, 0.9520210502743254, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.1765 | 31.0 | 620 | 2.3353 | 0.1399 | 0.2309 | 0.5290 | [0.17834652809044974, 0.44707050842744056, 0.6672614974450122, 0.8865733647613435, 0.20102622576966933, 0.26035059498784435, 0.25906992863781725, 0.8073731144605635, 0.33563509367192623, 0.020610206102061022, 0.23811301037545718, 0.0, 0.5251871675869779, 0.026663142418185123, 0.0, 0.0, 0.0, 0.0, 0.06917558942982224, nan, 0.0, 0.8142231019369434, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.21897253989642476, 0.8797498289846575, 0.9743044075168886, 0.9736732257093765, 0.38470650581377225, 0.640970253363935, 0.3130866030803206, 0.961659884944577, 0.9910380588983465, 0.03939446069155342, 0.24797885572139303, nan, 0.8433793928843434, 0.027062601128446752, nan, nan, 0.0, 0.0, 0.08094679669176645, nan, 0.0, 0.9672489082969432, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 0.976 | 32.0 | 640 | 2.3011 | 0.1341 | 0.2232 | 0.5324 | [0.194431455309514, 0.46955498610777546, 0.613067754057521, 0.882022225919731, 0.2237273962579701, 0.20414809478786788, 0.37648078590002887, 0.7976578556836739, 0.32327383300041984, 0.005091173567458049, 0.16303353658536585, 0.0, 0.30270124983201185, 0.016933489847398786, 0.0, 0.0, 0.0, 0.0011189095717627912, 0.0723300237575358, nan, 0.0, 0.8534193929838392, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.24262823096738034, 0.8330225740252125, 0.9848281553725614, 0.9777309659945832, 0.3336138907073163, 0.6121686692768102, 0.520932092066937, 0.9012347411252981, 0.9943841111437237, 0.00842938241871667, 0.16627798507462688, nan, 0.6126645631596127, 0.017052456844876298, nan, nan, 0.0, 0.003132118451025057, 0.08127083076163437, nan, 0.0, 0.9697122382711902, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.968 | 33.0 | 660 | 2.3002 | 0.1333 | 0.2195 | 0.5284 | [0.18928064973572256, 0.4570936992169421, 0.5611925805351944, 0.8887112833982858, 0.24282650654085353, 0.211432631032033, 0.27396406307297394, 0.8227039195396039, 0.3328624618167993, 0.01469700020132877, 0.17693715680292862, 0.0, 0.37943262411347517, 0.015965952626844786, 0.0, 0.0, 0.0, 0.002482128673550437, 0.02825462854464495, nan, 0.0, 0.8680091766505226, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.23673542249709198, 0.8609049154695593, 0.9869147381779773, 0.9732978685538072, 0.3529723495121419, 0.5982329028096546, 0.3555790355436252, 0.9537743791216501, 0.9936307601995891, 0.02511611904352314, 0.18034825870646767, nan, 0.5995539114350995, 0.016071563309551555, nan, nan, 0.0, 0.0071184510250569474, 0.02976484384643871, nan, 0.0, 0.9531967304893069, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.6715 | 34.0 | 680 | 2.3428 | 0.1319 | 0.2235 | 0.5275 | [0.192535476735673, 0.4475758030621435, 0.6196268863341836, 0.8762169279394266, 0.22313994090787, 0.23991485126617274, 0.29249404994493977, 0.8012692255760001, 0.31269688915140775, 0.008407482659567015, 0.16542907448518718, 0.0, 0.30225364560318163, 0.025684703848966296, 0.0, 0.0, 0.0, 0.013730446068720542, 0.01877666725115436, nan, 0.0, 0.8700765448370253, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.23725176304383713, 0.8391908531222515, 0.9815399707774597, 0.9840829151013303, 0.31076405124848033, 0.650900799543975, 0.39189369276752906, 0.9159113231373649, 0.9944623813716857, 0.020643385515224495, 0.16798818407960198, nan, 0.5953650310085954, 0.025914033484519487, nan, nan, 0.0, 0.17169703872437359, 0.019827799037155907, nan, 0.0, 0.9609226290448998, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.1251 | 35.0 | 700 | 2.3166 | 0.1320 | 0.2178 | 0.5272 | [0.1738329573074366, 0.4481836429707564, 0.646818738533712, 0.8887148166253409, 0.19203474618994068, 0.20198929222920364, 0.34707006701027093, 0.8137490007993605, 0.3359391814259651, 0.021759117376647256, 0.1565470714504753, 0.0, 0.2550185435921587, 0.030211480362537766, 0.0, 0.0, 0.0, 0.007979502196193264, 0.02663026387260928, nan, 0.0, 0.8660260602706645, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.21429668275192237, 0.8745314179614971, 0.9854510159114915, 0.9765369419393669, 0.30873780354749214, 0.6129187105290791, 0.4442096445637482, 0.928441139329311, 0.9926425985715683, 0.036642009289523485, 0.15873756218905472, nan, 0.4713306495484713, 0.030432850711357404, nan, nan, 0.0, 0.031036446469248292, 0.028746451055425257, nan, 0.0, 0.9637218676519986, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.4993 | 36.0 | 720 | 2.2998 | 0.1370 | 0.2256 | 0.5340 | [0.20034650494378936, 0.474809695065904, 0.6596017400081378, 0.8864627409496943, 0.24084149437794705, 0.2090614072601947, 0.3799128500039097, 0.8083854019616268, 0.31247986895263336, 0.03113526215303275, 0.15725529151065942, 0.0, 0.3212624773139746, 0.026428974725165234, 0.0, 0.0, 0.0, 0.007238181406921511, 0.04831199068684517, nan, 0.0, 0.8517368810051736, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.24926771777478338, 0.8319085312225154, 0.9802657019248986, 0.9761907073217296, 0.35188129305776367, 0.6233142822855257, 0.5087383631275344, 0.924000280622983, 0.996624596419137, 0.05470497161534492, 0.1599813432835821, nan, 0.6163094331411163, 0.02668533438409108, nan, nan, 0.0, 0.02733485193621868, 0.052508949512405874, nan, 0.0, 0.967752771246221, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.2429 | 37.0 | 740 | 2.3465 | 0.1253 | 0.2085 | 0.5136 | [0.15281192937818242, 0.42200137080191913, 0.623907888729428, 0.8850175339830876, 0.1902573939420975, 0.1966555884621611, 0.22019852343777596, 0.8171784140153339, 0.3342301304083277, 0.01515798462852263, 0.13233036536690207, 0.0, 0.24694015622116983, 0.018732205660693352, 0.0, 0.0, 4.903162539838196e-05, 0.0030627871362940277, 0.010836686147249714, nan, 0.0, 0.8669134049888461, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.18537393420784914, 0.8664204045734388, 0.9894710616398361, 0.9766954979792194, 0.26590604445275723, 0.6195640760241813, 0.2966855141165496, 0.9287077311631823, 0.9942373544662949, 0.024428006193015654, 0.13401741293532338, nan, 0.43684038733543684, 0.01875434904719188, nan, nan, 4.903162539838196e-05, 0.007403189066059226, 0.011387483026786816, nan, 0.0, 0.9572836188556713, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.216 | 38.0 | 760 | 2.2986 | 0.1350 | 0.2206 | 0.5278 | [0.18120567211036784, 0.47554281720915215, 0.6990705073134502, 0.8850704324614802, 0.22198798355644567, 0.20541313461097366, 0.35706786570222504, 0.8027076140287116, 0.31816065955831585, 0.013800683927699071, 0.14268218003674218, 0.0, 0.3186851922568516, 0.019285003317850033, 0.0, 0.0, 0.0, 0.0013712718546451835, 0.03893587494435862, nan, 0.0, 0.8547697204814483, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.22446033775197993, 0.8405355223297176, 0.9819941399204296, 0.9752846728082863, 0.32825212756008604, 0.6313997269849841, 0.46942524796771185, 0.9296969271783359, 0.9963897857352509, 0.0194391880268364, 0.1449004975124378, nan, 0.5901969317810902, 0.019492115125042966, nan, nan, 0.0, 0.003416856492027335, 0.04588939637081842, nan, 0.0, 0.9621542940320232, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.22 | 39.0 | 780 | 2.3125 | 0.1310 | 0.2193 | 0.5206 | [0.1709238047105282, 0.45177891820322713, 0.7082365502772366, 0.8788844022695806, 0.22740745283211708, 0.21889929002882666, 0.2659475082474528, 0.7970895449195763, 0.32059459357254055, 0.01463902527732315, 0.13316968395213255, 0.0, 0.25879383197058664, 0.023727994534563056, 0.0, 0.0, 0.0009151971759629999, 0.02631446793551951, 0.024663803819640638, nan, 0.0, 0.8502781743882625, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.20903154476039112, 0.8496198573243429, 0.9812259452557491, 0.9839049440361896, 0.34680008728451633, 0.6447354604503248, 0.34685019133017303, 0.9122632243580749, 0.9953331376577634, 0.0276965422329262, 0.13495024875621892, nan, 0.5131106517245131, 0.023876793064998868, nan, nan, 0.0009152570074364632, 0.1492027334851936, 0.026941118380446858, nan, 0.0, 0.9668570148919494, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.1834 | 40.0 | 800 | 2.2889 | 0.1301 | 0.2141 | 0.5265 | [0.1929371903937709, 0.4571880369812037, 0.6853469973207278, 0.8876211182502998, 0.19861466088824684, 0.2055685096628858, 0.2848686407856681, 0.8093772785130809, 0.34139712700055097, 0.00878581173260573, 0.09504706063879031, 0.0, 0.24800249334164448, 0.013450370469261064, 0.0, 0.0, 0.000441277416402445, 0.007058138688787019, 0.03166174466151749, nan, 0.0, 0.868396993092239, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.24199864100703683, 0.8476184892016027, 0.9858247322348496, 0.9749416740281971, 0.3342996976214969, 0.6242143317882484, 0.3815847088164182, 0.918956082503157, 0.9942569220232854, 0.0138482711164631, 0.09577114427860696, nan, 0.47617234250897617, 0.013514533153363123, nan, nan, 0.0004412846285854376, 0.027619589977220957, 0.0344093321812122, nan, 0.0, 0.9571716493113873, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.5118 | 41.0 | 820 | 2.2925 | 0.1322 | 0.2160 | 0.5278 | [0.20140379663179964, 0.46820203275875655, 0.6856134650963405, 0.887481830270401, 0.22260076987743904, 0.22177123388218797, 0.30176087068124646, 0.8122131818520081, 0.31711941125562687, 0.01762114537444934, 0.09896557048016057, 0.0, 0.26787909982556973, 0.012025717045870948, 0.0, 0.0, 0.0, 0.01032133209692376, 0.026776576803914147, nan, 0.0, 0.8698927993527508, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.2544599154660309, 0.8362317990814033, 0.9857987797123942, 0.9739903377890816, 0.3136631441129711, 0.6073384036121987, 0.39166523882955434, 0.9188368177353725, 0.9966441639761275, 0.027180457595045587, 0.09965796019900497, nan, 0.5096289848765096, 0.012106070641101955, nan, nan, 0.0, 0.07289293849658314, 0.02947167016417726, nan, 0.0, 0.963106035158437, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.4791 | 42.0 | 840 | 2.3143 | 0.1337 | 0.2198 | 0.5225 | [0.17082896778818507, 0.45902372255040724, 0.6914575631960871, 0.8839466838151472, 0.20380808221633573, 0.21829002244078705, 0.2823923722326677, 0.8160080452180024, 0.3103076684231194, 0.019091109154929578, 0.1226762943616531, 0.0, 0.3667371847030106, 0.03827691592200113, 0.0, 0.0, 0.0010620567955295578, 0.015357821405953135, 0.015238353931821269, nan, 0.0, 0.867283166505767, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.20812171048843128, 0.8468249780123132, 0.9839872936450058, 0.9785399253815862, 0.31899373421864774, 0.6289095900274515, 0.3617092162126145, 0.9449698330293251, 0.9973290284707954, 0.029846894890762084, 0.12414490049751244, nan, 0.612990969426613, 0.03864049832745076, nan, nan, 0.001062351883631609, 0.08285876993166287, 0.01731267744722874, nan, 0.0, 0.9556040756914119, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.5573 | 43.0 | 860 | 2.3130 | 0.1320 | 0.2175 | 0.5219 | [0.1648530644139321, 0.44917533040001445, 0.6925549341301199, 0.8862138131656735, 0.18958373175240645, 0.21231384036197637, 0.27370624724795095, 0.8200861585023553, 0.32058106499184214, 0.02355267889248472, 0.11618959679901508, 0.0, 0.33892986542443065, 0.03070511114068947, 0.0, 0.0, 0.00024515812699190976, 0.017795816422104278, 0.017780519569538692, nan, 0.0, 0.8588767845595184, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.2012595638170043, 0.8743086094009577, 0.9853134675424778, 0.9778895220344358, 0.2781258767417937, 0.6327948037142043, 0.33726464484931556, 0.9428862073803844, 0.9957929752470404, 0.03380354378118011, 0.1173818407960199, nan, 0.56995974322707, 0.030944256742595093, nan, nan, 0.00024515812699190976, 0.0811503416856492, 0.020012961362794717, nan, 0.0, 0.9666330758033814, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.7719 | 44.0 | 880 | 2.2998 | 0.1333 | 0.2199 | 0.5257 | [0.18463386068417822, 0.4676365136979615, 0.6840028242176664, 0.8843575402661116, 0.2129184549356223, 0.20642904433925907, 0.3201266369669455, 0.8109078276980083, 0.31568594652194387, 0.007547939616483068, 0.12530731407498463, 0.0, 0.3065877617230917, 0.034201156913679635, 0.0, 0.0, 0.001405205797290894, 0.023802027580127196, 0.02196289166934018, nan, 0.0, 0.85741284129905, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.22634334918825125, 0.8239030587315548, 0.9855652070102954, 0.978785849035235, 0.3092989182954581, 0.6267344703958718, 0.4302453976050412, 0.933471306299986, 0.9970746502299188, 0.00954756580079133, 0.1267879353233831, nan, 0.5552170601675552, 0.034549250077549276, nan, nan, 0.0014055732614202828, 0.10763097949886105, 0.02535180841871374, nan, 0.0, 0.9651774717276901, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.0403 | 45.0 | 900 | 2.2880 | 0.1327 | 0.2202 | 0.5235 | [0.17035715701431006, 0.4626951528852379, 0.6793273578739064, 0.8840633094198196, 0.21937922061830123, 0.20780566312930646, 0.30671219310304054, 0.8117275272313842, 0.31566770320998055, 0.008071316708830262, 0.13461833819689756, 0.0, 0.3026885361759769, 0.0444647378689137, 0.0, 0.0, 0.003391433369747762, 0.02177376526818906, 0.020826482516715993, nan, 0.0, 0.8486096807415036, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.20622910164422792, 0.8520590247239324, 0.9844648200581856, 0.9785431612191342, 0.33115122042457684, 0.6198040892249074, 0.39503493441468196, 0.931647256910341, 0.9971529204578808, 0.011525890246000344, 0.13627176616915423, nan, 0.5591883364160591, 0.044903126283754896, nan, nan, 0.003399526027621149, 0.10506833712984055, 0.023453894580915936, nan, 0.0, 0.9687604971447766, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.5917 | 46.0 | 920 | 2.2892 | 0.1340 | 0.2216 | 0.5260 | [0.1736793975508642, 0.46015023493743684, 0.6908199361013053, 0.887111591144557, 0.21875451524346193, 0.2048902470179024, 0.3203704812066076, 0.8089483157624068, 0.3222859315107549, 0.02411484339536995, 0.1529799295235177, 0.0, 0.2829693559203429, 0.043404411213708236, 0.0, 0.0, 0.004164829242001078, 0.02610441767068273, 0.02117460182388064, nan, 0.0, 0.8539790344012993, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.21253651813718152, 0.8632189973614776, 0.9842649856352789, 0.9780448422367404, 0.3303718943857352, 0.6256094085174685, 0.4076189388314581, 0.926981899817595, 0.9953624889932492, 0.036555995183210045, 0.15523942786069653, nan, 0.527962136873028, 0.04370425640724687, nan, nan, 0.004167688158862467, 0.11474943052391799, 0.023037279348228613, nan, 0.0, 0.9714477662075915, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.2285 | 47.0 | 940 | 2.2962 | 0.1342 | 0.2221 | 0.5275 | [0.19599716234109965, 0.4721781567285578, 0.6767966078463135, 0.8855580798827409, 0.24860719808502976, 0.21086919429809367, 0.2943940094830577, 0.8014180926103793, 0.32251935591127157, 0.027421516079632467, 0.144272113022113, 0.0, 0.2586192267443505, 0.05425465450456079, 0.0, 0.0, 0.004572324373754858, 0.023474947765168584, 0.020446228980678598, nan, 0.0, 0.8607462686567164, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.24235374510052324, 0.8225857519788918, 0.9867200942595615, 0.9775012215286744, 0.3658468156738053, 0.6117936486506758, 0.3877053705713252, 0.9190332538234881, 0.9964778397417082, 0.049286082917598485, 0.14606654228855723, nan, 0.49744315090849744, 0.05460307346641068, nan, nan, 0.004576285037182316, 0.16315489749430523, 0.022157758301444265, nan, 0.0, 0.9685925428283507, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.7362 | 48.0 | 960 | 2.2867 | 0.1335 | 0.2219 | 0.5251 | [0.1810132624505115, 0.46289090955837614, 0.7009337664380836, 0.8902938421018519, 0.22985068340248088, 0.21333525245556653, 0.297397221935807, 0.801989891449341, 0.3137530257395739, 0.02959547020022182, 0.1363531669865643, 0.0, 0.27724525650035137, 0.0350093358228861, 0.0, 0.0, 0.0017645040599931381, 0.027893760148457434, 0.027733384767992947, nan, 0.0, 0.8473915163334959, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.22273664328738094, 0.8445069871982801, 0.9828324063957397, 0.9778571636589557, 0.34022257551669316, 0.6447804629254609, 0.3843737506425267, 0.9195032973200505, 0.996771353096566, 0.04360915190091175, 0.13805970149253732, nan, 0.5365575019040365, 0.03521156280652923, nan, nan, 0.0017651385143417505, 0.13695899772209566, 0.03106098012591038, nan, 0.0, 0.9730153398275669, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.8843 | 49.0 | 980 | 2.2918 | 0.1320 | 0.2172 | 0.5234 | [0.1767027937577472, 0.46560560158548614, 0.6625505590001549, 0.8845600368113623, 0.2269413833026028, 0.20745588500993717, 0.2993936587412992, 0.8051597903949266, 0.3106521858373101, 0.008812729498164015, 0.1330620393120393, 0.0, 0.27645742622157715, 0.05612690283585936, 0.0, 0.0, 0.003634230211372044, 0.010895367946923125, 0.028342906221482635, nan, 0.0, 0.8566964285714286, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.21343099655645004, 0.8338512655135346, 0.987550574978135, 0.9766113662029712, 0.32996664484553756, 0.6200741040757242, 0.3938736268966436, 0.928125438473411, 0.9973877311417669, 0.012386031309134697, 0.13471703980099503, nan, 0.4973887498639974, 0.056598395358780675, nan, nan, 0.003644684154613059, 0.05096810933940774, 0.032588569312430565, nan, 0.0, 0.9669129996640914, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.5944 | 50.0 | 1000 | 2.2863 | 0.1322 | 0.2193 | 0.5236 | [0.17584122398758895, 0.46366573409354317, 0.6899099859557716, 0.8850136717893229, 0.2200496783644354, 0.21159926360234527, 0.28788921058002676, 0.8057838286364055, 0.3129988790098432, 0.01191510487774606, 0.13235068324888685, 0.0, 0.2795662866085401, 0.0396786413020855, 0.0, 0.0, 0.0023832843617368595, 0.021505376344086023, 0.030850694907567956, nan, 0.0, 0.8503331373701744, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.21451742313435987, 0.8538336753640183, 0.984218271094859, 0.9792485738046007, 0.3231085757037314, 0.6344898969443319, 0.36251832390960836, 0.9197207801318928, 0.9971040015654046, 0.0165147084121796, 0.13401741293532338, nan, 0.5442280491785443, 0.03995674007998055, nan, nan, 0.002386205769387922, 0.10079726651480637, 0.03569003826688063, nan, 0.0, 0.9717276900683014, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
khizarAI/AST-finetuned-gtzan | khizarAI | 2024-11-25T13:59:39Z | 160 | 0 | transformers | [
"transformers",
"safetensors",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-11-25T13:59:00Z | ---
library_name: transformers
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: AST-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.93
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AST-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3724
- Accuracy: 0.93
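
The snippet below is a minimal inference sketch (not part of the original card): it assumes `transformers` plus an audio backend such as `torchaudio` or `soundfile` are installed, and the audio file path is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
classifier = pipeline("audio-classification", model="khizarAI/AST-finetuned-gtzan")

# "example_track.wav" is a placeholder; the pipeline resamples the clip to the
# feature extractor's expected sampling rate before classification.
predictions = classifier("example_track.wav")
print(predictions)  # list of {"label": genre, "score": probability} dicts
```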
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
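
As a rough illustration, these settings correspond to a `transformers.TrainingArguments` configuration along the lines of the sketch below; the output directory and the per-epoch evaluation strategy are assumptions inferred from this card rather than recorded values.

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="AST-finetuned-gtzan",  # assumed name, not recorded in the card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    eval_strategy="epoch",  # assumed from the per-epoch results below
)
```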
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7384 | 1.0 | 113 | 0.5599 | 0.82 |
| 0.5848 | 2.0 | 226 | 0.9999 | 0.76 |
| 0.4541 | 3.0 | 339 | 0.6138 | 0.86 |
| 0.0368 | 4.0 | 452 | 0.6328 | 0.85 |
| 0.0444 | 5.0 | 565 | 0.5640 | 0.88 |
| 0.0395 | 6.0 | 678 | 0.4339 | 0.91 |
| 0.0003 | 7.0 | 791 | 0.3463 | 0.92 |
| 0.0002 | 8.0 | 904 | 0.3744 | 0.93 |
| 0.0001 | 9.0 | 1017 | 0.3729 | 0.93 |
| 0.0001 | 10.0 | 1130 | 0.3724 | 0.93 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mradermacher/Fimbulvetr-Underground-V.02-test-GGUF | mradermacher | 2024-11-25T13:57:40Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TheHierophant/Fimbulvetr-Underground-V.02-test",
"base_model:quantized:TheHierophant/Fimbulvetr-Underground-V.02-test",
"endpoints_compatible",
"region:us"
] | null | 2024-11-25T13:03:46Z | ---
base_model: TheHierophant/Fimbulvetr-Underground-V.02-test
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/TheHierophant/Fimbulvetr-Underground-V.02-test
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
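
As a concrete illustration (a sketch added here, not taken from the upstream repo), one way to run a single-file quant from this repo in Python is with `huggingface_hub` plus `llama-cpp-python`; both packages and the context-size choice are assumptions.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo (Q4_K_M, the "fast, recommended" entry below).
model_path = hf_hub_download(
    repo_id="mradermacher/Fimbulvetr-Underground-V.02-test-GGUF",
    filename="Fimbulvetr-Underground-V.02-test.Q4_K_M.gguf",
)

# Load with llama-cpp-python; n_ctx is an arbitrary illustrative choice.
llm = Llama(model_path=model_path, n_ctx=4096)
output = llm("Once upon a time", max_tokens=64)
print(output["choices"][0]["text"])
```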
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Underground-V.02-test-GGUF/resolve/main/Fimbulvetr-Underground-V.02-test.Q2_K.gguf) | Q2_K | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Underground-V.02-test-GGUF/resolve/main/Fimbulvetr-Underground-V.02-test.Q3_K_S.gguf) | Q3_K_S | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Underground-V.02-test-GGUF/resolve/main/Fimbulvetr-Underground-V.02-test.Q3_K_M.gguf) | Q3_K_M | 5.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Underground-V.02-test-GGUF/resolve/main/Fimbulvetr-Underground-V.02-test.Q3_K_L.gguf) | Q3_K_L | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Underground-V.02-test-GGUF/resolve/main/Fimbulvetr-Underground-V.02-test.IQ4_XS.gguf) | IQ4_XS | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Underground-V.02-test-GGUF/resolve/main/Fimbulvetr-Underground-V.02-test.Q4_0_4_4.gguf) | Q4_0_4_4 | 6.0 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Underground-V.02-test-GGUF/resolve/main/Fimbulvetr-Underground-V.02-test.Q4_K_S.gguf) | Q4_K_S | 6.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Underground-V.02-test-GGUF/resolve/main/Fimbulvetr-Underground-V.02-test.Q4_K_M.gguf) | Q4_K_M | 6.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Underground-V.02-test-GGUF/resolve/main/Fimbulvetr-Underground-V.02-test.Q5_K_S.gguf) | Q5_K_S | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Underground-V.02-test-GGUF/resolve/main/Fimbulvetr-Underground-V.02-test.Q5_K_M.gguf) | Q5_K_M | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Underground-V.02-test-GGUF/resolve/main/Fimbulvetr-Underground-V.02-test.Q6_K.gguf) | Q6_K | 8.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Underground-V.02-test-GGUF/resolve/main/Fimbulvetr-Underground-V.02-test.Q8_0.gguf) | Q8_0 | 11.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Underground-V.02-test-GGUF/resolve/main/Fimbulvetr-Underground-V.02-test.f16.gguf) | f16 | 21.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mlx-community/Llama-3.1-Tulu-3-8B-8bit | mlx-community | 2024-11-25T13:53:17Z | 87 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mlx",
"conversational",
"en",
"dataset:allenai/RLVR-GSM-MATH-IF-Mixed-Constraints",
"base_model:allenai/Llama-3.1-Tulu-3-8B",
"base_model:quantized:allenai/Llama-3.1-Tulu-3-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2024-11-25T13:29:56Z | ---
license: llama3.1
language:
- en
pipeline_tag: text-generation
datasets:
- allenai/RLVR-GSM-MATH-IF-Mixed-Constraints
base_model: allenai/Llama-3.1-Tulu-3-8B
library_name: transformers
tags:
- mlx
---
# mlx-community/Llama-3.1-Tulu-3-8B-8bit
The model [mlx-community/Llama-3.1-Tulu-3-8B-8bit](https://huggingface.co/mlx-community/Llama-3.1-Tulu-3-8B-8bit) was
converted to MLX format from [allenai/Llama-3.1-Tulu-3-8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B)
using mlx-lm version **0.20.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Llama-3.1-Tulu-3-8B-8bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
haejiness/tmp-ner | haejiness | 2024-11-25T13:48:14Z | 179 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-25T13:47:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
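
As a placeholder sketch based only on the repository tags (a RoBERTa token-classification model), inference could look like the following; the example sentence and aggregation strategy are illustrative assumptions.

```python
from transformers import pipeline

# The repo tags indicate a RoBERTa token-classification (NER-style) head.
ner = pipeline(
    "token-classification",
    model="haejiness/tmp-ner",
    aggregation_strategy="simple",  # illustrative choice
)
print(ner("Hugging Face is based in New York City."))
```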
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hi-Q/krx_gemma_9B_it_1125-3 | Hi-Q | 2024-11-25T13:48:04Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"krx",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T12:49:34Z | ---
library_name: transformers
tags:
- krx
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
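The card does not yet include a snippet, so here is a minimal sketch, assuming this is a standard `gemma2` chat model loadable with `transformers`; the example prompt and generation settings are illustrative, not the authors' recommendations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hi-Q/krx_gemma_9B_it_1125-3"

# Load tokenizer and model (dtype/device placement are illustrative choices)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Gemma chat models expect alternating user/assistant turns (no system role)
messages = [{"role": "user", "content": "Explain what a stock index is in one paragraph."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```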
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LHRuig/olivrarw | LHRuig | 2024-11-25T13:46:17Z | 26 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2024-11-25T13:46:11Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/1000237109.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: oliv
---
# olivrarrw
<Gallery />
## Model description
Oliv arrw lora
## Trigger words
You should use `oliv` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/olivrarw/tree/main) them in the Files & versions tab.
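## Use with diffusers

A minimal sketch, assuming a recent `diffusers` release with Flux LoRA support; the prompt, step count, and guidance scale are illustrative, not tested settings for this LoRA.

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model and attach this LoRA
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("LHRuig/olivrarw")
pipe.to("cuda")

# Include the trigger word `oliv` in the prompt
image = pipe(
    "oliv wearing a suit, studio portrait",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("oliv.png")
```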
|
Skyler215/GPT2_VIT | Skyler215 | 2024-11-25T13:45:25Z | 53 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | 2024-11-25T13:13:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
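No snippet is provided yet; the sketch below is an assumption based on the repository's `vision-encoder-decoder` / `image-text-to-text` tags (ViT-style encoder with a GPT-2 decoder for image captioning). The processor class, file path, and generation settings are guesses, and the 4-bit loading options implied by the `bitsandbytes` tag are omitted.

```python
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer

model_id = "Skyler215/GPT2_VIT"

# Load the encoder-decoder model together with its image processor and tokenizer
model = VisionEncoderDecoderModel.from_pretrained(model_id)
processor = ViTImageProcessor.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Caption a local image (path is a placeholder)
image = Image.open("example.jpg").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```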
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated | huihui-ai | 2024-11-25T13:44:17Z | 275 | 4 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"abliterated",
"uncensored",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-14B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-13T03:59:33Z | ---
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterate/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-14B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- abliterated
- uncensored
---
# huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated
This is an uncensored version of [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it).
The uncensored Qwen2.5-Coder series covers six mainstream model sizes:
[0.5](https://huggingface.co/huihui-ai/Qwen2.5-Coder-0.5B-Instruct-abliterated),
[1.5](https://huggingface.co/huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated),
[3](https://huggingface.co/huihui-ai/Qwen2.5-Coder-3B-Instruct-abliterated),
[7](https://huggingface.co/huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated),
[14](https://huggingface.co/huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated),
and [32](https://huggingface.co/huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated) billion parameters.
If the desired result is not achieved, you can clear the conversation and try again.
## ollama
You can use [huihui_ai/qwen2.5-coder-abliterate:14b](https://ollama.com/huihui_ai/qwen2.5-coder-abliterate:14b) directly:
```
ollama run huihui_ai/qwen2.5-coder-abliterate:14b
```
## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Initialize conversation context
initial_messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy()  # Copy the initial conversation context

# Enter conversation loop
while True:
    # Get user input
    user_input = input("User: ").strip()  # Strip leading and trailing spaces

    # If the user types '/exit', end the conversation
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break

    # If the user types '/clean', reset the conversation context
    if user_input.lower() == "/clean":
        messages = initial_messages.copy()  # Reset conversation context
        print("Chat history cleared. Starting a new conversation.")
        continue

    # If input is empty, prompt the user and continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    # Add user input to the conversation
    messages.append({"role": "user", "content": user_input})

    # Build the chat template
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )

    # Tokenize input and prepare it for the model
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # Generate a response from the model
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=8192
    )

    # Extract model output, removing special tokens
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # Add the model's response to the conversation
    messages.append({"role": "assistant", "content": response})

    # Print the model's response
    print(f"Qwen: {response}")
```
|
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k3_task5_organization_fold1 | MayBashendy | 2024-11-25T13:43:37Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T13:41:11Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k3_task5_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k3_task5_organization_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7544
- Qwk: 0.6603
- Mse: 0.7544
- Rmse: 0.8685
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch of this configuration follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
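For reference, the listed configuration corresponds roughly to the `TrainingArguments` below. This is a reconstruction for illustration only; the actual training script, dataset handling, and `Trainer` wiring are not published with this card.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above; output_dir is assumed
training_args = TrainingArguments(
    output_dir="Arabic_FineTuningAraBERT_AugV5_k3_task5_organization_fold1",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",  # Adam betas/epsilon above are the optimizer defaults
)
```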
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0870 | 2 | 4.8614 | 0.0155 | 4.8614 | 2.2049 |
| No log | 0.1739 | 4 | 2.6330 | -0.2963 | 2.6330 | 1.6226 |
| No log | 0.2609 | 6 | 1.3248 | 0.0813 | 1.3248 | 1.1510 |
| No log | 0.3478 | 8 | 1.1055 | 0.2657 | 1.1055 | 1.0514 |
| No log | 0.4348 | 10 | 0.9917 | 0.2780 | 0.9917 | 0.9959 |
| No log | 0.5217 | 12 | 1.0568 | 0.0 | 1.0568 | 1.0280 |
| No log | 0.6087 | 14 | 1.0919 | 0.0 | 1.0919 | 1.0449 |
| No log | 0.6957 | 16 | 1.0621 | 0.0 | 1.0621 | 1.0306 |
| No log | 0.7826 | 18 | 1.0337 | 0.1571 | 1.0337 | 1.0167 |
| No log | 0.8696 | 20 | 1.1545 | 0.2933 | 1.1545 | 1.0745 |
| No log | 0.9565 | 22 | 1.3688 | 0.25 | 1.3688 | 1.1699 |
| No log | 1.0435 | 24 | 1.3366 | 0.25 | 1.3366 | 1.1561 |
| No log | 1.1304 | 26 | 1.2003 | 0.3354 | 1.2003 | 1.0956 |
| No log | 1.2174 | 28 | 0.9418 | 0.3443 | 0.9418 | 0.9705 |
| No log | 1.3043 | 30 | 0.8305 | 0.4322 | 0.8305 | 0.9113 |
| No log | 1.3913 | 32 | 0.9287 | 0.3575 | 0.9287 | 0.9637 |
| No log | 1.4783 | 34 | 0.8908 | 0.4412 | 0.8908 | 0.9438 |
| No log | 1.5652 | 36 | 0.8175 | 0.4785 | 0.8175 | 0.9042 |
| No log | 1.6522 | 38 | 0.8353 | 0.2279 | 0.8353 | 0.9140 |
| No log | 1.7391 | 40 | 0.7903 | 0.2933 | 0.7903 | 0.8890 |
| No log | 1.8261 | 42 | 0.7185 | 0.5690 | 0.7185 | 0.8477 |
| No log | 1.9130 | 44 | 0.6778 | 0.6429 | 0.6778 | 0.8233 |
| No log | 2.0 | 46 | 0.7545 | 0.6247 | 0.7545 | 0.8686 |
| No log | 2.0870 | 48 | 0.8029 | 0.5550 | 0.8029 | 0.8960 |
| No log | 2.1739 | 50 | 0.8353 | 0.5550 | 0.8353 | 0.9140 |
| No log | 2.2609 | 52 | 0.6523 | 0.6456 | 0.6523 | 0.8076 |
| No log | 2.3478 | 54 | 0.5705 | 0.6307 | 0.5705 | 0.7553 |
| No log | 2.4348 | 56 | 0.5903 | 0.5918 | 0.5903 | 0.7683 |
| No log | 2.5217 | 58 | 0.5393 | 0.6657 | 0.5393 | 0.7344 |
| No log | 2.6087 | 60 | 0.7200 | 0.6247 | 0.7200 | 0.8485 |
| No log | 2.6957 | 62 | 0.8989 | 0.5396 | 0.8989 | 0.9481 |
| No log | 2.7826 | 64 | 0.8626 | 0.5550 | 0.8626 | 0.9288 |
| No log | 2.8696 | 66 | 0.7073 | 0.6094 | 0.7073 | 0.8410 |
| No log | 2.9565 | 68 | 0.6778 | 0.6094 | 0.6778 | 0.8233 |
| No log | 3.0435 | 70 | 0.6908 | 0.6456 | 0.6908 | 0.8312 |
| No log | 3.1304 | 72 | 0.7220 | 0.6456 | 0.7220 | 0.8497 |
| No log | 3.2174 | 74 | 0.6952 | 0.6462 | 0.6952 | 0.8338 |
| No log | 3.3043 | 76 | 0.8119 | 0.6888 | 0.8119 | 0.9011 |
| No log | 3.3913 | 78 | 1.0723 | 0.5520 | 1.0723 | 1.0355 |
| No log | 3.4783 | 80 | 1.0900 | 0.5520 | 1.0900 | 1.0440 |
| No log | 3.5652 | 82 | 0.8989 | 0.5741 | 0.8989 | 0.9481 |
| No log | 3.6522 | 84 | 0.9276 | 0.5463 | 0.9276 | 0.9631 |
| No log | 3.7391 | 86 | 0.7766 | 0.6086 | 0.7766 | 0.8812 |
| No log | 3.8261 | 88 | 0.7771 | 0.6086 | 0.7771 | 0.8815 |
| No log | 3.9130 | 90 | 0.8783 | 0.5800 | 0.8783 | 0.9372 |
| No log | 4.0 | 92 | 1.1123 | 0.5247 | 1.1123 | 1.0547 |
| No log | 4.0870 | 94 | 1.1309 | 0.5247 | 1.1309 | 1.0635 |
| No log | 4.1739 | 96 | 1.0244 | 0.5174 | 1.0244 | 1.0121 |
| No log | 4.2609 | 98 | 0.7676 | 0.6610 | 0.7676 | 0.8761 |
| No log | 4.3478 | 100 | 0.6071 | 0.6306 | 0.6071 | 0.7792 |
| No log | 4.4348 | 102 | 0.6411 | 0.5170 | 0.6411 | 0.8007 |
| No log | 4.5217 | 104 | 0.6206 | 0.6667 | 0.6206 | 0.7878 |
| No log | 4.6087 | 106 | 0.6848 | 0.7108 | 0.6848 | 0.8275 |
| No log | 4.6957 | 108 | 0.7176 | 0.6909 | 0.7176 | 0.8471 |
| No log | 4.7826 | 110 | 0.7236 | 0.6888 | 0.7236 | 0.8506 |
| No log | 4.8696 | 112 | 0.7618 | 0.6360 | 0.7618 | 0.8728 |
| No log | 4.9565 | 114 | 0.7076 | 0.7314 | 0.7076 | 0.8412 |
| No log | 5.0435 | 116 | 0.8010 | 0.6360 | 0.8010 | 0.8950 |
| No log | 5.1304 | 118 | 0.9730 | 0.4792 | 0.9730 | 0.9864 |
| No log | 5.2174 | 120 | 0.9684 | 0.4792 | 0.9684 | 0.9840 |
| No log | 5.3043 | 122 | 0.7552 | 0.6086 | 0.7552 | 0.8690 |
| No log | 5.3913 | 124 | 0.5826 | 0.7254 | 0.5826 | 0.7633 |
| No log | 5.4783 | 126 | 0.5498 | 0.7494 | 0.5498 | 0.7415 |
| No log | 5.5652 | 128 | 0.5507 | 0.6977 | 0.5507 | 0.7421 |
| No log | 5.6522 | 130 | 0.5834 | 0.7108 | 0.5834 | 0.7638 |
| No log | 5.7391 | 132 | 0.6467 | 0.6767 | 0.6467 | 0.8042 |
| No log | 5.8261 | 134 | 0.6673 | 0.6888 | 0.6673 | 0.8169 |
| No log | 5.9130 | 136 | 0.6672 | 0.6888 | 0.6672 | 0.8168 |
| No log | 6.0 | 138 | 0.6075 | 0.7072 | 0.6075 | 0.7794 |
| No log | 6.0870 | 140 | 0.6295 | 0.7072 | 0.6295 | 0.7934 |
| No log | 6.1739 | 142 | 0.5756 | 0.7418 | 0.5756 | 0.7587 |
| No log | 6.2609 | 144 | 0.5584 | 0.7136 | 0.5584 | 0.7473 |
| No log | 6.3478 | 146 | 0.5662 | 0.7136 | 0.5662 | 0.7524 |
| No log | 6.4348 | 148 | 0.6104 | 0.7418 | 0.6104 | 0.7813 |
| No log | 6.5217 | 150 | 0.7599 | 0.6058 | 0.7599 | 0.8717 |
| No log | 6.6087 | 152 | 0.7832 | 0.6086 | 0.7832 | 0.8850 |
| No log | 6.6957 | 154 | 0.6875 | 0.7241 | 0.6875 | 0.8291 |
| No log | 6.7826 | 156 | 0.6174 | 0.7386 | 0.6174 | 0.7857 |
| No log | 6.8696 | 158 | 0.6159 | 0.7386 | 0.6159 | 0.7848 |
| No log | 6.9565 | 160 | 0.6314 | 0.7430 | 0.6314 | 0.7946 |
| No log | 7.0435 | 162 | 0.6826 | 0.7418 | 0.6826 | 0.8262 |
| No log | 7.1304 | 164 | 0.7130 | 0.7363 | 0.7130 | 0.8444 |
| No log | 7.2174 | 166 | 0.7266 | 0.7314 | 0.7266 | 0.8524 |
| No log | 7.3043 | 168 | 0.7501 | 0.6215 | 0.7501 | 0.8661 |
| No log | 7.3913 | 170 | 0.8049 | 0.5678 | 0.8049 | 0.8972 |
| No log | 7.4783 | 172 | 0.8183 | 0.5678 | 0.8183 | 0.9046 |
| No log | 7.5652 | 174 | 0.7941 | 0.5678 | 0.7941 | 0.8911 |
| No log | 7.6522 | 176 | 0.7117 | 0.6767 | 0.7117 | 0.8436 |
| No log | 7.7391 | 178 | 0.6850 | 0.7108 | 0.6850 | 0.8277 |
| No log | 7.8261 | 180 | 0.6689 | 0.7108 | 0.6689 | 0.8179 |
| No log | 7.9130 | 182 | 0.6671 | 0.7108 | 0.6671 | 0.8168 |
| No log | 8.0 | 184 | 0.7045 | 0.6767 | 0.7045 | 0.8393 |
| No log | 8.0870 | 186 | 0.7538 | 0.6606 | 0.7538 | 0.8682 |
| No log | 8.1739 | 188 | 0.7722 | 0.6606 | 0.7722 | 0.8788 |
| No log | 8.2609 | 190 | 0.7324 | 0.6606 | 0.7324 | 0.8558 |
| No log | 8.3478 | 192 | 0.6980 | 0.6767 | 0.6980 | 0.8354 |
| No log | 8.4348 | 194 | 0.6929 | 0.7072 | 0.6929 | 0.8324 |
| No log | 8.5217 | 196 | 0.7120 | 0.7363 | 0.7120 | 0.8438 |
| No log | 8.6087 | 198 | 0.7231 | 0.7363 | 0.7231 | 0.8504 |
| No log | 8.6957 | 200 | 0.7581 | 0.6753 | 0.7581 | 0.8707 |
| No log | 8.7826 | 202 | 0.8078 | 0.5800 | 0.8078 | 0.8988 |
| No log | 8.8696 | 204 | 0.8640 | 0.5678 | 0.8640 | 0.9295 |
| No log | 8.9565 | 206 | 0.8883 | 0.5678 | 0.8883 | 0.9425 |
| No log | 9.0435 | 208 | 0.8683 | 0.5678 | 0.8683 | 0.9318 |
| No log | 9.1304 | 210 | 0.8356 | 0.5678 | 0.8356 | 0.9141 |
| No log | 9.2174 | 212 | 0.7904 | 0.5800 | 0.7904 | 0.8890 |
| No log | 9.3043 | 214 | 0.7546 | 0.6606 | 0.7546 | 0.8687 |
| No log | 9.3913 | 216 | 0.7491 | 0.6606 | 0.7491 | 0.8655 |
| No log | 9.4783 | 218 | 0.7529 | 0.6606 | 0.7529 | 0.8677 |
| No log | 9.5652 | 220 | 0.7578 | 0.6603 | 0.7578 | 0.8705 |
| No log | 9.6522 | 222 | 0.7560 | 0.6603 | 0.7560 | 0.8695 |
| No log | 9.7391 | 224 | 0.7537 | 0.6603 | 0.7537 | 0.8682 |
| No log | 9.8261 | 226 | 0.7545 | 0.6603 | 0.7545 | 0.8686 |
| No log | 9.9130 | 228 | 0.7550 | 0.6603 | 0.7550 | 0.8689 |
| No log | 10.0 | 230 | 0.7544 | 0.6603 | 0.7544 | 0.8685 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|