modelId (string, lengths 5-138) | author (string, lengths 2-42) | last_modified (date, 2020-02-15 11:33:14 to 2025-04-17 00:37:10) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 428 classes) | tags (sequence, lengths 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-04-17 00:33:35) | card (string, lengths 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---
OpenBuddy/openbuddy-qwen1.5-32b-v21.1-32k | OpenBuddy | "2024-04-09T13:27:25Z" | 3,026 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"fi",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-09T11:35:08Z" | ---
license: other
license_name: tongyi-qianwen-license-agreement
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-14B/blob/39b74a78357df4d2296e838d87565967d663a67a/LICENSE
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
pipeline_tag: text-generation
inference: false
library_name: transformers
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/Qwen/Qwen1.5-32B
License: Qwen: https://huggingface.co/Qwen/Qwen1.5-14B/blob/39b74a78357df4d2296e838d87565967d663a67a/LICENSE
# Prompt Format
We recommend using the fast tokenizer from `transformers`, which should be enabled by default in the `transformers` and `vllm` libraries. Other implementations including `sentencepiece` may not work as expected, especially for special tokens like `<|role|>`, `<|says|>` and `<|end|>`.
```
<|role|>system<|says|>You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human(user).
Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
You cannot access the internet, but you have vast knowledge, cutoff: 2023-04.
You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), not related to GPT or OpenAI.<|end|>
<|role|>user<|says|>History input 1<|end|>
<|role|>assistant<|says|>History output 1<|end|>
<|role|>user<|says|>History input 2<|end|>
<|role|>assistant<|says|>History output 2<|end|>
<|role|>user<|says|>Current input<|end|>
<|role|>assistant<|says|>
```
This format is also defined in `tokenizer_config.json`, which means you can directly use `vllm` to deploy an OpenAI-like API service. For more information, please refer to the [vllm documentation](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html).
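As a quick sketch of how this template can be applied programmatically (illustrative only; the message contents below are placeholders, and the exact rendered string is determined by the chat template shipped in `tokenizer_config.json`):
```python
from transformers import AutoTokenizer

# The fast tokenizer is loaded by default; the chat template comes from tokenizer_config.json.
tokenizer = AutoTokenizer.from_pretrained("OpenBuddy/openbuddy-qwen1.5-32b-v21.1-32k")

messages = [
    {"role": "user", "content": "History input 1"},
    {"role": "assistant", "content": "History output 1"},
    {"role": "user", "content": "Current input"},
]

# Render the conversation into the <|role|>...<|says|>...<|end|> prompt shown above,
# leaving the prompt open for the assistant's next turn.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```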
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## Disclaimer
All OpenBuddy models have inherent limitations and may produce erroneous, harmful, offensive, or otherwise undesirable outputs. Users should exercise caution and refrain from using these models in critical or high-risk scenarios, to avoid personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, the control of software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. |
camidenecken/RoBERTa-RM1-v1-5-rm-v8 | camidenecken | "2024-11-14T17:47:39Z" | 180 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-14T17:47:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Aaronx/testmodel | Aaronx | "2024-03-18T09:51:20Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2023-11-16T08:46:31Z" | ---
license: apache-2.0
---
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px">
<img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg">
</picture> |
mradermacher/DarkUnholyDareDevil-abliterated-8b-GGUF | mradermacher | "2024-11-22T07:47:47Z" | 19 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:SzilviaB/DarkUnholyDareDevil-abliterated-8b",
"base_model:quantized:SzilviaB/DarkUnholyDareDevil-abliterated-8b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-22T07:05:50Z" | ---
base_model: SzilviaB/DarkUnholyDareDevil-abliterated-8b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/SzilviaB/DarkUnholyDareDevil-abliterated-8b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DarkUnholyDareDevil-abliterated-8b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
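As an illustration, one of the single-file quants from this repo can be downloaded and run with `llama-cpp-python` (a minimal sketch; the choice of the Q4_K_M file and the generation settings are examples, not recommendations):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch one of the static quants listed in the table below (Q4_K_M chosen as an example).
path = hf_hub_download(
    repo_id="mradermacher/DarkUnholyDareDevil-abliterated-8b-GGUF",
    filename="DarkUnholyDareDevil-abliterated-8b.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Briefly explain what a GGUF quant is.", max_tokens=64)
print(out["choices"][0]["text"])
```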
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DarkUnholyDareDevil-abliterated-8b-GGUF/resolve/main/DarkUnholyDareDevil-abliterated-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/DarkUnholyDareDevil-abliterated-8b-GGUF/resolve/main/DarkUnholyDareDevil-abliterated-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/DarkUnholyDareDevil-abliterated-8b-GGUF/resolve/main/DarkUnholyDareDevil-abliterated-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DarkUnholyDareDevil-abliterated-8b-GGUF/resolve/main/DarkUnholyDareDevil-abliterated-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/DarkUnholyDareDevil-abliterated-8b-GGUF/resolve/main/DarkUnholyDareDevil-abliterated-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/DarkUnholyDareDevil-abliterated-8b-GGUF/resolve/main/DarkUnholyDareDevil-abliterated-8b.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/DarkUnholyDareDevil-abliterated-8b-GGUF/resolve/main/DarkUnholyDareDevil-abliterated-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DarkUnholyDareDevil-abliterated-8b-GGUF/resolve/main/DarkUnholyDareDevil-abliterated-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DarkUnholyDareDevil-abliterated-8b-GGUF/resolve/main/DarkUnholyDareDevil-abliterated-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/DarkUnholyDareDevil-abliterated-8b-GGUF/resolve/main/DarkUnholyDareDevil-abliterated-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/DarkUnholyDareDevil-abliterated-8b-GGUF/resolve/main/DarkUnholyDareDevil-abliterated-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DarkUnholyDareDevil-abliterated-8b-GGUF/resolve/main/DarkUnholyDareDevil-abliterated-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DarkUnholyDareDevil-abliterated-8b-GGUF/resolve/main/DarkUnholyDareDevil-abliterated-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3-GGUF | mradermacher | "2025-03-02T10:56:07Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"trl",
"sft",
"generated_from_trainer",
"en",
"base_model:BounharAbdelaziz/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3",
"base_model:quantized:BounharAbdelaziz/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-02T10:51:06Z" | ---
base_model: BounharAbdelaziz/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- trl
- sft
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/BounharAbdelaziz/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3-GGUF/resolve/main/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3-GGUF/resolve/main/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3-GGUF/resolve/main/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3-GGUF/resolve/main/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3-GGUF/resolve/main/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3-GGUF/resolve/main/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3-GGUF/resolve/main/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3-GGUF/resolve/main/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3-GGUF/resolve/main/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3-GGUF/resolve/main/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3-GGUF/resolve/main/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3-GGUF/resolve/main/Al-Atlas-LLM-0.5B-bs-4-lr-5e-05-ep-3-wp-0.1-gacc-32-gnm-1.0-FP16-mx-2048-v2.3.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Sakalti/SJT-7B-1M | Sakalti | "2025-01-27T03:52:06Z" | 17 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Qwen/Qwen2.5-7B-Instruct-1M",
"base_model:merge:Qwen/Qwen2.5-7B-Instruct-1M",
"base_model:Sakalti/model-3",
"base_model:merge:Sakalti/model-3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-27T03:48:53Z" | ---
base_model:
- Qwen/Qwen2.5-7B-Instruct-1M
- Sakalti/model-3
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-7B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M) as a base.
### Models Merged
The following models were included in the merge:
* [Sakalti/model-3](https://huggingface.co/Sakalti/model-3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Sakalti/model-3
parameters:
weight: 1
density: 1
merge_method: ties
base_model: Qwen/Qwen2.5-7B-Instruct-1M
parameters:
weight: 1
density: 1
normalize: true
int8_mask: true
dtype: float16
```
|
Davada/subnet6 | Davada | "2024-03-11T00:08:11Z" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-10T23:32:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isspek/xlnet-base-cased_zika_gpt4o_4_2e-5_16_weight | isspek | "2025-03-02T18:29:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-02T18:29:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kyelSteensma/idefics-9b-pokemon | kyelSteensma | "2024-01-31T05:25:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-01-31T05:25:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pai123/QWen-1.5B-System1-Ver3-16bgguf | pai123 | "2025-03-18T12:49:43Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Qwen-1.5B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/DeepSeek-R1-Distill-Qwen-1.5B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-18T12:49:17Z" | ---
base_model: unsloth/DeepSeek-R1-Distill-Qwen-1.5B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pai123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Qwen-1.5B-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
faodl/actionable-xlm-roberta-large-f1_weighted | faodl | "2024-10-31T02:16:14Z" | 127 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-10-31T02:14:30Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
albertus-sussex/veriscrape-sbert-university-reference_8_to_verify_2-fold-8 | albertus-sussex | "2025-04-01T00:33:45Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:11532",
"loss:TripletLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:Alibaba-NLP/gte-base-en-v1.5",
"base_model:finetune:Alibaba-NLP/gte-base-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-04-01T00:33:03Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:11532
- loss:TripletLoss
base_model: Alibaba-NLP/gte-base-en-v1.5
widget:
- source_sentence: Private
sentences:
- public
- University of Illinois--Urbana-Champaign
- name
- type
- source_sentence: 'Official telephone: (865) 981-8000'
sentences:
- phone
- http://www.ucumberlands.edu
- (806) 291-3500
- website
- source_sentence: < 2-year, Private not-for-profit
sentences:
- Public
- type
- http://www.ashland-rtc.org
- website
- source_sentence: Public
sentences:
- Private for-profit
- website
- http://www.newschool.edu
- type
- source_sentence: 800 921-7399 (toll free)
sentences:
- type
- 'Official telephone: (859) 246-3300'
- phone
- public
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
- silhouette_cosine
- silhouette_euclidean
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-base-en-v1.5
results:
- task:
type: triplet
name: Triplet
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy
value: 1.0
name: Cosine Accuracy
- type: cosine_accuracy
value: 1.0
name: Cosine Accuracy
- task:
type: silhouette
name: Silhouette
dataset:
name: Unknown
type: unknown
metrics:
- type: silhouette_cosine
value: 0.9623849987983704
name: Silhouette Cosine
- type: silhouette_euclidean
value: 0.8306067585945129
name: Silhouette Euclidean
- type: silhouette_cosine
value: 0.9622072577476501
name: Silhouette Cosine
- type: silhouette_euclidean
value: 0.8319014310836792
name: Silhouette Euclidean
---
# SentenceTransformer based on Alibaba-NLP/gte-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) <!-- at revision a829fd0e060bb84554da0dfd354d0de0f7712b7f -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("albertus-sussex/veriscrape-sbert-university-reference_8_to_verify_2-fold-8")
# Run inference
sentences = [
'800 921-7399 (toll free)',
'Official telephone: (859) 246-3300',
'public',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:--------|
| **cosine_accuracy** | **1.0** |
#### Silhouette
* Evaluated with <code>veriscrape.training.SilhouetteEvaluator</code>
| Metric | Value |
|:----------------------|:-----------|
| **silhouette_cosine** | **0.9624** |
| silhouette_euclidean | 0.8306 |
#### Triplet
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:--------|
| **cosine_accuracy** | **1.0** |
#### Silhouette
* Evaluated with <code>veriscrape.training.SilhouetteEvaluator</code>
| Metric | Value |
|:----------------------|:-----------|
| **silhouette_cosine** | **0.9622** |
| silhouette_euclidean | 0.8319 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 11,532 training samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative</code>, <code>pos_attr_name</code>, and <code>neg_attr_name</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative | pos_attr_name | neg_attr_name |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|
| type | string | string | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 8.34 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 8.27 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 7.83 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.0 tokens</li><li>max: 3 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.0 tokens</li><li>max: 3 tokens</li></ul> |
* Samples:
| anchor | positive | negative | pos_attr_name | neg_attr_name |
|:-----------------------------------------------------|:-----------------------------------------------------|:----------------------------------|:------------------|:---------------------|
| <code>Private not-for-profit- 4-year or above</code> | <code>Private not-for-profit- 4-year or above</code> | <code>619 660-4000</code> | <code>type</code> | <code>phone</code> |
| <code>University of Wisconsin--Platteville</code> | <code>Seton Hall University</code> | <code>www.marquette.edu</code> | <code>name</code> | <code>website</code> |
| <code>PRIVATE</code> | <code>< 2-year, Private for-profit</code> | <code>www.lcjvs.com/adult/</code> | <code>type</code> | <code>website</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
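For reference, the loss above corresponds roughly to the following construction in Sentence Transformers (a minimal sketch; variable names are illustrative, and `trust_remote_code=True` is assumed to be needed for the GTE base model's custom code):
```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("Alibaba-NLP/gte-base-en-v1.5", trust_remote_code=True)

# Euclidean triplet loss with margin 5, matching the parameters listed above.
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)
```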
### Evaluation Dataset
#### Unnamed Dataset
* Size: 1,282 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative</code>, <code>pos_attr_name</code>, and <code>neg_attr_name</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative | pos_attr_name | neg_attr_name |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|
| type | string | string | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 8.41 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 8.32 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 8.04 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.0 tokens</li><li>max: 3 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.0 tokens</li><li>max: 3 tokens</li></ul> |
* Samples:
| anchor | positive | negative | pos_attr_name | neg_attr_name |
|:--------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:------------------|:---------------------|
| <code>The Hartford Conservatory</code> | <code>Minnesota State University--Mankato</code> | <code>http://www.hartfordconservatory.org</code> | <code>name</code> | <code>website</code> |
| <code>Spencerian College</code> | <code>University of Washington</code> | <code>college, private (nonprofit)</code> | <code>name</code> | <code>type</code> |
| <code>Lincoln Land Community College</code> | <code>Wilson College</code> | <code>college, private (nonprofit)</code> | <code>name</code> | <code>type</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | cosine_accuracy | silhouette_cosine |
|:-----:|:----:|:-------------:|:---------------:|:---------------:|:-----------------:|
| -1 | -1 | - | - | 0.7090 | 0.2738 |
| 1.0 | 91 | 0.3294 | 0.0 | 1.0 | 0.9375 |
| 2.0 | 182 | 0.0003 | 0.0 | 1.0 | 0.9601 |
| 3.0 | 273 | 0.0 | 0.0 | 1.0 | 0.9629 |
| 4.0 | 364 | 0.0 | 0.0 | 1.0 | 0.9624 |
| 5.0 | 455 | 0.0 | 0.0 | 1.0 | 0.9624 |
| -1 | -1 | - | - | 1.0 | 0.9622 |
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 4.0.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.5.2
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
albertus-sussex/veriscrape-sbert-book-reference_7_to_verify_3-fold-9 | albertus-sussex | "2025-03-23T23:09:29Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:17748",
"loss:AttributeTripletLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:Alibaba-NLP/gte-base-en-v1.5",
"base_model:finetune:Alibaba-NLP/gte-base-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-03-23T23:08:40Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:17748
- loss:AttributeTripletLoss
base_model: Alibaba-NLP/gte-base-en-v1.5
widget:
- source_sentence: ': St. Martin''s Press'
sentences:
- title
- publisher
- The Oxford Book of Christmas Stories
- Paternoster Press
- source_sentence: Simon & Schuster Ltd
sentences:
- publisher
- title
- 'Another Life: A Memoir of Other People'
- Skyhorse Publishing (March 8, 2010)
- source_sentence: Richard Templar
sentences:
- Catherine Saunders
- author
- publisher
- Thomas Nelson
- source_sentence: Atria; First edition (June 30, 2009)
sentences:
- publication_date
- publisher
- Atria (January 4, 2005)
- Awakenings Publications; illustrated edition edition (March 30, 2006)
- source_sentence: Michael Henderson
sentences:
- Colleen Belk
- title
- author
- David Bowie
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
- silhouette_cosine
- silhouette_euclidean
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-base-en-v1.5
results:
- task:
type: triplet
name: Triplet
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy
value: 0.9832656979560852
name: Cosine Accuracy
- type: cosine_accuracy
value: 0.9822080135345459
name: Cosine Accuracy
- task:
type: silhouette
name: Silhouette
dataset:
name: Unknown
type: unknown
metrics:
- type: silhouette_cosine
value: 0.7657725811004639
name: Silhouette Cosine
- type: silhouette_euclidean
value: 0.621295154094696
name: Silhouette Euclidean
- type: silhouette_cosine
value: 0.768454372882843
name: Silhouette Cosine
- type: silhouette_euclidean
value: 0.6237089037895203
name: Silhouette Euclidean
---
# SentenceTransformer based on Alibaba-NLP/gte-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) <!-- at revision a829fd0e060bb84554da0dfd354d0de0f7712b7f -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("albertus-sussex/veriscrape-sbert-book-reference_7_to_verify_3-fold-9")
# Run inference
sentences = [
'Michael Henderson',
'Colleen Belk',
'David Bowie',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9833** |
#### Silhouette
* Evaluated with <code>veriscrape.training.SilhouetteEvaluator</code>
| Metric | Value |
|:----------------------|:-----------|
| **silhouette_cosine** | **0.7658** |
| silhouette_euclidean | 0.6213 |
#### Triplet
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9822** |
#### Silhouette
* Evaluated with <code>veriscrape.training.SilhouetteEvaluator</code>
| Metric | Value |
|:----------------------|:-----------|
| **silhouette_cosine** | **0.7685** |
| silhouette_euclidean | 0.6237 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 17,748 training samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative</code>, <code>pos_attr_name</code>, and <code>neg_attr_name</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative | pos_attr_name | neg_attr_name |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
| type | string | string | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 7.84 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 7.68 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 8.25 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.76 tokens</li><li>max: 5 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.83 tokens</li><li>max: 5 tokens</li></ul> |
* Samples:
| anchor | positive | negative | pos_attr_name | neg_attr_name |
|:---------------------------|:---------------------------|:-----------------------------------------------------------|:------------------------------|:---------------------|
| <code>9780375815164</code> | <code>9780425205594</code> | <code>Greetings from Nowhere (Frances Foster Books)</code> | <code>isbn_13</code> | <code>title</code> |
| <code>9781616794514</code> | <code>9780451528810</code> | <code>Greetings from Nowhere (Frances Foster Books)</code> | <code>isbn_13</code> | <code>title</code> |
| <code>14/01/2010</code> | <code>5/1/2010</code> | <code>: 9781416552963</code> | <code>publication_date</code> | <code>isbn_13</code> |
* Loss: <code>veriscrape.training.AttributeTripletLoss</code> with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 1,972 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative</code>, <code>pos_attr_name</code>, and <code>neg_attr_name</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative | pos_attr_name | neg_attr_name |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
| type | string | string | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 7.71 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 7.51 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 8.28 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.76 tokens</li><li>max: 5 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.77 tokens</li><li>max: 5 tokens</li></ul> |
* Samples:
| anchor | positive | negative | pos_attr_name | neg_attr_name |
|:----------------------------------------------------------------------------------|:-----------------------------|:------------------------------------------------------------|:------------------------------|:------------------------------|
| <code>4/10/2007</code> | <code>6/17/2008</code> | <code>Bob J. Moore</code> | <code>publication_date</code> | <code>author</code> |
| <code>Ultraprevention: The 6-Week Plan That Will Make You Healthy for Life</code> | <code>Plantation</code> | <code>Houghton Mifflin Harcourt (September 25, 2006)</code> | <code>title</code> | <code>publication_date</code> |
| <code>Terence Blacker</code> | <code>Johanna Lindsey</code> | <code>Blackstaff Press Ltd</code> | <code>author</code> | <code>publisher</code> |
* Loss: <code>veriscrape.training.AttributeTripletLoss</code> with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | cosine_accuracy | silhouette_cosine |
|:-----:|:----:|:-------------:|:---------------:|:---------------:|:-----------------:|
| -1 | -1 | - | - | 0.3808 | 0.1112 |
| 1.0 | 139 | 1.0803 | 0.2716 | 0.9812 | 0.7143 |
| 2.0 | 278 | 0.1095 | 0.2732 | 0.9807 | 0.7807 |
| 3.0 | 417 | 0.0752 | 0.2309 | 0.9817 | 0.7675 |
| 4.0 | 556 | 0.0564 | 0.2115 | 0.9843 | 0.7816 |
| 5.0 | 695 | 0.0448 | 0.2208 | 0.9833 | 0.7658 |
| -1 | -1 | - | - | 0.9822 | 0.7685 |
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 3.4.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.5.2
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### AttributeTripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
RazzzHF/creepy-diffusion | RazzzHF | "2023-01-05T19:54:13Z" | 0 | 4 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-01-05T18:47:37Z" | ---
license: creativeml-openrail-m
---
This model has a creepy bias, producing strong horror pictures with great fidelity.
You don't need any specific trigger words: any horror-related prompt will result in a strong level of creep.
It works well at 512x512 and 768x768.
Example creepy images (PNG previews) are included in the model repository. |
sqrk/All_balanced-lang_tag-whisper-lg-3-Nov30 | sqrk | "2024-11-30T20:29:31Z" | 9 | 0 | null | [
"safetensors",
"whisper",
"generated_from_trainer",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"region:us"
] | null | "2024-11-30T09:37:27Z" | ---
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: All_balanced-lang_tag-whisper-lg-3-Nov30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# All_balanced-lang_tag-whisper-lg-3-Nov30
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2030
- Wer: 18.0679
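As a fine-tune of `openai/whisper-large-v3`, the model should be usable through the standard `transformers` ASR pipeline; the sketch below is illustrative (audio file, device, and decoding options are assumptions, not part of the training setup):
```python
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sqrk/All_balanced-lang_tag-whisper-lg-3-Nov30",
    torch_dtype=torch.float16,
    device=0,  # set to -1 for CPU
)

# Transcribe a local audio file (path is illustrative)
result = asr("sample.wav", return_timestamps=True)
print(result["text"])
```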
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 1.0883 | 0.3210 | 100 | 0.5905 | 33.2422 |
| 0.4857 | 0.6421 | 200 | 0.4462 | 26.5892 |
| 0.3709 | 0.9631 | 300 | 0.3049 | 27.3639 |
| 0.1935 | 1.2841 | 400 | 0.2699 | 22.2602 |
| 0.1615 | 1.6051 | 500 | 0.2412 | 21.6906 |
| 0.1504 | 1.9262 | 600 | 0.2297 | 23.1032 |
| 0.0921 | 2.2472 | 700 | 0.2316 | 20.8931 |
| 0.0736 | 2.5682 | 800 | 0.2132 | 19.8679 |
| 0.0782 | 2.8892 | 900 | 0.2108 | 22.6475 |
| 0.0555 | 3.2103 | 1000 | 0.2226 | 19.4577 |
| 0.0489 | 3.5313 | 1100 | 0.2099 | 20.5742 |
| 0.0418 | 3.8523 | 1200 | 0.2068 | 19.9134 |
| 0.0364 | 4.1734 | 1300 | 0.2309 | 22.5564 |
| 0.0296 | 4.4944 | 1400 | 0.2175 | 22.5564 |
| 0.0285 | 4.8154 | 1500 | 0.2040 | 19.3210 |
| 0.0213 | 5.1364 | 1600 | 0.2037 | 18.6147 |
| 0.0156 | 5.4575 | 1700 | 0.2159 | 18.6375 |
| 0.0172 | 5.7785 | 1800 | 0.2068 | 19.0704 |
| 0.0183 | 6.0995 | 1900 | 0.2134 | 18.2046 |
| 0.0184 | 6.4205 | 2000 | 0.2085 | 18.1362 |
| 0.0142 | 6.7416 | 2100 | 0.1998 | 17.4755 |
| 0.0163 | 7.0626 | 2200 | 0.2059 | 18.1590 |
| 0.009 | 7.3836 | 2300 | 0.1967 | 18.3185 |
| 0.012 | 7.7047 | 2400 | 0.1976 | 17.5894 |
| 0.0119 | 8.0257 | 2500 | 0.1894 | 19.5944 |
| 0.0085 | 8.3467 | 2600 | 0.1961 | 18.4780 |
| 0.0059 | 8.6677 | 2700 | 0.2018 | 17.3844 |
| 0.0068 | 8.9888 | 2800 | 0.1821 | 17.5439 |
| 0.0056 | 9.3098 | 2900 | 0.1996 | 18.0451 |
| 0.0053 | 9.6308 | 3000 | 0.2143 | 17.8856 |
| 0.0077 | 9.9518 | 3100 | 0.1810 | 16.4502 |
| 0.0069 | 10.2729 | 3200 | 0.1873 | 17.3160 |
| 0.0076 | 10.5939 | 3300 | 0.1897 | 18.6375 |
| 0.0095 | 10.9149 | 3400 | 0.2144 | 18.6147 |
| 0.0051 | 11.2360 | 3500 | 0.2006 | 17.2477 |
| 0.0085 | 11.5570 | 3600 | 0.2106 | 17.0198 |
| 0.013 | 11.8780 | 3700 | 0.2030 | 18.0679 |
### Framework versions
- Transformers 4.43.4
- Pytorch 2.4.1
- Datasets 3.0.0
- Tokenizers 0.19.1
|
rajparikh03/safety-vector | rajparikh03 | "2025-04-07T11:06:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-07T10:51:19Z" |  |
tenebrisu/speecht5_tts_common_voice_uk | tenebrisu | "2024-11-14T23:09:44Z" | 102 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"text-to-speech",
"uk",
"dataset:common_voice",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2023-07-03T19:06:58Z" | ---
datasets:
- common_voice
language:
- uk
license: mit
base_model: microsoft/speecht5_tts
pipeline_tag: text-to-speech
---
This model is a fine-tuned version of SpeechT5 for the Ukrainian language, using the Common Voice dataset.
## Usage:
```python
# pip install git+https://github.com/huggingface/transformers
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("tenebrisu/speecht5_tts_common_voice_uk")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Random speaker embedding; replace with a real x-vector for more natural speech
speaker_embeddings = 2 * torch.rand((1, 512)) - 1

# Input text must be transliterated Ukrainian (see the table below)
text = """ pryvit yak spravy """

inputs = processor(text=text, return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, output_cross_attentions=True)
waveform = vocoder.forward(speech[0])
```
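Note that the model expects transliterated (romanized) Ukrainian input, as in the example above. A small helper built from the table in the next section might look like this (a sketch covering only the lowercase mapping; extend it as needed):
```python
# Excerpt of the mapping from the transliteration table below
UK_TO_LAT = {
    "а": "a", "б": "b", "в": "v", "г": "h", "ґ": "g", "д": "d", "е": "e",
    "є": "je", "ж": "zh", "з": "z", "и": "y", "і": "i", "ї": "ji", "й": "j",
    "к": "k", "л": "l", "м": "m", "н": "n", "о": "o", "п": "p", "р": "r",
    "с": "s", "т": "t", "у": "u", "ф": "f", "х": "x", "ц": "c", "ч": "ch",
    "ш": "sh", "щ": "shch", "ь": "q", "ю": "ju", "я": "ja",
}

def transliterate(text: str) -> str:
    # Lower-case first so the same mapping also covers capital letters
    return "".join(UK_TO_LAT.get(ch, ch) for ch in text.lower())

print(transliterate("Привіт, як справи?"))  # pryvit, jak spravy?
```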
## Transliteration Table:
To support the transliteration of Ukrainian characters, the following table is used:
| Ukrainian | Transliteration |
|-----------|------------------|
| Є | je |
| І | i |
| Ї | ji |
| А | a |
| Б | b |
| В | v |
| Г | h |
| Д | d |
| Е | e |
| Ж | zh |
| З | z |
| И | y |
| Й | j |
| К | k |
| Л | l |
| М | m |
| Н | n |
| О | o |
| П | p |
| Р | r |
| С | s |
| Т | t |
| У | u |
| Ф | f |
| Х | x |
| Ц | c |
| Ч | ch |
| Ш | sh |
| Щ | shch |
| Ь | q |
| Ю | ju |
| Я | ja |
| а | a |
| б | b |
| в | v |
| г | h |
| д | d |
| е | e |
| ж | zh |
| з | z |
| и | y |
| й | j |
| к | k |
| л | l |
| м | m |
| н | n |
| о | o |
| п | p |
| р | r |
| с | s |
| т | t |
| у | u |
| ф | f |
| х | x |
| ц | c |
| ч | ch |
| ш | sh |
| щ | shch |
| ь | q |
| ю | ju |
| я | ja |
| є | je |
| і | i |
| ї | ji |
| Ґ | g |
| ґ | g | |
Weyaxi/Bagel-Hermes-2x34B | Weyaxi | "2024-06-27T08:30:30Z" | 182 | 16 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"yi",
"moe",
"conversational",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-12T16:02:03Z" | ---
tags:
- yi
- moe
model-index:
- name: Bagel-Hermes-2x34b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.8
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 64.82
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
license: apache-2.0
---

# Bagel-Hermes-2x34B
This is the model for Bagel-Hermes-2x34B. I used [this repo](https://bit.ly/weyaxi-moe-repo) to make this MoE model.
# Prompt Template(s):
Since [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) uses many prompt templates, and [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) uses ChatML, you can utilize ChatML and other prompt templates provided by bagel.
**Note:** I currently do not know which prompt template is best.
### ChatML:
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system}
{instruction}
### Response:
```
### Vicuna
```
{system}
USER: {instruction}
ASSISTANT:
```
Visit [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) to try more prompt templates.
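As a rough illustration, the ChatML template above can be filled in by hand and passed to `transformers` (a sketch, not an official snippet from this card; generation settings are arbitrary, and a 2x34B MoE needs substantial GPU memory or a quantized build):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/Bagel-Hermes-2x34B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# ChatML-formatted prompt, following the template above
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain what a mixture-of-experts model is.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```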
# Yaml Config to reproduce
```yaml
base_model: nontoxic-bagel-34b-v0.2
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: bagel-dpo-34b-v0.2
    positive_prompts: ["question answering", "Q:", "science", "biology", "chemistry", "physics"]
  - source_model: Nous-Hermes-2-Yi-34B
    positive_prompts: ["chat", "math", "reason", "mathematics", "solve", "count", "python", "javascript", "programming", "algorithm", "tell me", "assistant"]
```
# Quantized versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
##### GPTQ
- [TheBloke/Bagel-Hermes-2x34B-GPTQ](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-GPTQ)
##### GGUF
- [TheBloke/Bagel-Hermes-2x34B-GGUF](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-GGUF)
##### AWQ
- [TheBloke/Bagel-Hermes-2x34B-AWQ](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-AWQ)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Bagel-Hermes-2x34b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.10|
|AI2 Reasoning Challenge (25-Shot)|69.80|
|HellaSwag (10-Shot) |85.26|
|MMLU (5-Shot) |77.24|
|TruthfulQA (0-shot) |64.82|
|Winogrande (5-shot) |84.77|
|GSM8k (5-shot) |68.69|
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi) |
imagineaiuser/OrpoLlama-3-8B | imagineaiuser | "2024-04-29T19:25:08Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-29T19:20:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
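In the absence of an official snippet, a generic `transformers` loading sketch might look like this (model-specific prompt format and generation settings are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "imagineaiuser/OrpoLlama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```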
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jojoUla/bert-large-cased-sigir-support-no-label-20 | jojoUla | "2023-02-08T05:08:43Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-02-07T14:49:21Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-cased-sigir-support-no-label-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-sigir-support-no-label-20
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2135
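Since this is a masked-language-model fine-tune of `bert-large-cased`, it can be exercised with the standard fill-mask pipeline; the example sentence below is illustrative and unrelated to the (unspecified) fine-tuning data:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="jojoUla/bert-large-cased-sigir-support-no-label-20")
for pred in fill("The system should [MASK] user queries efficiently."):
    print(pred["token_str"], round(pred["score"], 3))
```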
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7629 | 1.0 | 246 | 2.2876 |
| 2.2004 | 2.0 | 492 | 1.9698 |
| 1.9011 | 3.0 | 738 | 1.8034 |
| 1.7521 | 4.0 | 984 | 1.7313 |
| 1.6405 | 5.0 | 1230 | 1.6195 |
| 1.553 | 6.0 | 1476 | 1.5437 |
| 1.4707 | 7.0 | 1722 | 1.5072 |
| 1.398 | 8.0 | 1968 | 1.4477 |
| 1.3563 | 9.0 | 2214 | 1.4426 |
| 1.3085 | 10.0 | 2460 | 1.4250 |
| 1.2678 | 11.0 | 2706 | 1.3580 |
| 1.2255 | 12.0 | 2952 | 1.3553 |
| 1.1901 | 13.0 | 3198 | 1.3094 |
| 1.1656 | 14.0 | 3444 | 1.2731 |
| 1.1371 | 15.0 | 3690 | 1.3012 |
| 1.1131 | 16.0 | 3936 | 1.2850 |
| 1.0945 | 17.0 | 4182 | 1.2473 |
| 1.0774 | 18.0 | 4428 | 1.2770 |
| 1.0531 | 19.0 | 4674 | 1.2285 |
| 1.0608 | 20.0 | 4920 | 1.2645 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
denbeo/253d5082-110b-4562-b40e-1f57407a4fcf | denbeo | "2025-01-14T07:39:21Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-14B-Chat",
"base_model:adapter:Qwen/Qwen1.5-14B-Chat",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-14T06:02:13Z" | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-14B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 253d5082-110b-4562-b40e-1f57407a4fcf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-14B-Chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 40dae4fa66209106_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/40dae4fa66209106_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: denbeo/253d5082-110b-4562-b40e-1f57407a4fcf
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/40dae4fa66209106_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 829bf80f-718f-42d6-9eb3-f0195f1b690b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 829bf80f-718f-42d6-9eb3-f0195f1b690b
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 253d5082-110b-4562-b40e-1f57407a4fcf
This model is a fine-tuned version of [Qwen/Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3114
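Since this repository contains a LoRA adapter (PEFT), it is loaded on top of its base model; a minimal sketch (device and precision settings are assumptions, not taken from the training run):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-14B-Chat", device_map="auto")
model = PeftModel.from_pretrained(base, "denbeo/253d5082-110b-4562-b40e-1f57407a4fcf")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-14B-Chat")
```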
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5103 | 0.0423 | 200 | 1.3114 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
brixeus/bceb642e-b477-4cc4-b60c-49e6ffb822f5 | brixeus | "2025-01-26T03:47:35Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B",
"base_model:adapter:Qwen/Qwen2-0.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-26T03:42:19Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bceb642e-b477-4cc4-b60c-49e6ffb822f5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 312e15ae347cedbc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/312e15ae347cedbc_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: brixeus/bceb642e-b477-4cc4-b60c-49e6ffb822f5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/312e15ae347cedbc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 3d54e57c-f395-4e6d-b663-403d21a2587f
wandb_project: Gradients-On-Three
wandb_run: your_name
wandb_runid: 3d54e57c-f395-4e6d-b663-403d21a2587f
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# bceb642e-b477-4cc4-b60c-49e6ffb822f5
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1003
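As with other PEFT adapters, this LoRA is attached to its base model at load time; a minimal sketch (merging is optional and shown only for illustration):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B")
model = PeftModel.from_pretrained(base, "brixeus/bceb642e-b477-4cc4-b60c-49e6ffb822f5")
model = model.merge_and_unload()  # optionally bake the adapter into the base weights
```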
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0023 | 1 | 2.2822 |
| 2.2274 | 0.0205 | 9 | 2.2534 |
| 2.2166 | 0.0410 | 18 | 2.1828 |
| 2.1907 | 0.0614 | 27 | 2.1499 |
| 2.0656 | 0.0819 | 36 | 2.1304 |
| 2.0424 | 0.1024 | 45 | 2.1185 |
| 2.0353 | 0.1229 | 54 | 2.1104 |
| 1.8425 | 0.1433 | 63 | 2.1055 |
| 2.2262 | 0.1638 | 72 | 2.1027 |
| 2.0917 | 0.1843 | 81 | 2.1009 |
| 2.1066 | 0.2048 | 90 | 2.1004 |
| 2.0233 | 0.2253 | 99 | 2.1003 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LaLegumbreArtificial/SPIE_MULTICLASS_MICROSOFT_1_2 | LaLegumbreArtificial | "2025-02-20T20:21:15Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2025-02-20T20:05:38Z" | ---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SPIE_MULTICLASS_MICROSOFT_1_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SPIE_MULTICLASS_MICROSOFT_1_2
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1168
- Accuracy: 0.9592
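A minimal inference sketch using the image-classification pipeline (the image path is illustrative, and the class labels depend on the unspecified training dataset):
```python
from transformers import pipeline

clf = pipeline("image-classification", model="LaLegumbreArtificial/SPIE_MULTICLASS_MICROSOFT_1_2")
print(clf("example.jpg"))
```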
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.3786 | 0.9886 | 65 | 0.3010 | 0.8942 |
| 0.1822 | 1.9924 | 131 | 0.1648 | 0.9383 |
| 0.1725 | 2.9962 | 197 | 0.1339 | 0.9533 |
| 0.1198 | 4.0 | 263 | 0.0962 | 0.97 |
| 0.1082 | 4.9430 | 325 | 0.1168 | 0.9592 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
v000000/NM-12B-Lyris-dev-3 | v000000 | "2024-09-10T19:52:14Z" | 13 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"base_model:Sao10K/MN-12B-Lyra-v1",
"base_model:merge:Sao10K/MN-12B-Lyra-v1",
"base_model:Sao10K/MN-12B-Lyra-v3",
"base_model:merge:Sao10K/MN-12B-Lyra-v3",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:merge:unsloth/Mistral-Nemo-Instruct-2407",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-05T21:19:18Z" | ---
base_model:
- Sao10K/MN-12B-Lyra-v1
- Sao10K/MN-12B-Lyra-v3
- unsloth/Mistral-Nemo-Instruct-2407
library_name: transformers
tags:
- merge
- mistral
license: cc-by-nc-4.0
---
Lyris-dev3-Mistral-Nemo-12B-2407
------------------------------------------------------------------

*EXPERIMENTAL*
An attempt to fix Sao10K's Lyra-v3 prompt format and stop token, and to boost smarts, using strategic *LATCOS* vector-similarity merging.
Prototype, unfinished (dev3).
- Sao10K/MN-12B-Lyra-v1 <b>*Base*</b>
- Sao10K/MN-12B-Lyra-v3 <b>*x2 Sequential PASS, order: 1, 3*</b>
- unsloth/Mistral-Nemo-Instruct-2407 <b>*x2 Sequential PASS, order: 2, 4*</b>
- with z0.0001 value
# <b>Prompt format:</b>
*Mistral Instruct*
```
[INST] System Message [/INST]
[INST] Name: Let's get started. Please respond based on the information and instructions provided above. [/INST]
<s>[INST] Name: What is your favourite condiment? [/INST]
AssistantName: Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s>
[INST] Name: Do you have mayonnaise recipes? [/INST]
``` |
matteomarjanovic/flatsketcher | matteomarjanovic | "2025-02-08T16:00:44Z" | 7 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:adapter:black-forest-labs/FLUX.1-schnell",
"license:apache-2.0",
"region:us"
] | text-to-image | "2025-02-08T15:36:00Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
The technical flat sketch shows a sleeveless top with a sophisticated
architectural design. The front view features a dramatic V-neckline that
plunges deeply, created by overlapping panels. The bodice maintains a clean,
fitted silhouette through the torso. At the waist, there's a distinct
horizontal band or waistband with a moderate width, likely 2-3 inches, that
sits at the natural waistline. The most distinctive detail is an
asymmetrical draped panel that extends from the left side of the waistband,
falling below the hip line in a diagonal sweep. This panel appears to be
constructed from the same fabric as the main body and creates an intentional
gathered effect where it connects to the waistband. The back view would
mirror the clean lines of the front bodice, maintaining the sleeveless
armholes with clean finished edges. The V-neckline on the back would be
higher than the front for structural integrity. The waistband continues
around the entire circumference of the garment with consistent width. The
asymmetrical draped panel visible from the front would be partially visible
from the back view, showing how it wraps around the left side of the
garment. Construction details would include darts or princess seams for
fitting, though these are minimally visible to maintain the clean aesthetic.
The garment likely closes with an invisible zipper at the center back seam
above the waistband for ease of dressing. In the style of FLTSKC
output:
url: images/replicate-prediction-nh84hee4bsrm80cmwq28sws73c.webp
base_model: black-forest-labs/FLUX.1-schnell
instance_prompt: FLTSKC
license: apache-2.0
---
# img2flat
<Gallery />
## Model description
A LoRA that creates technical fashion flat sketches.
## Trigger words
You should use `FLTSKC` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/matteomarjanovic/img2flat/tree/main) them in the Files & versions tab.
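Alternatively, the LoRA can be loaded directly with `diffusers`; a loading sketch (the inference settings below are typical FLUX.1-schnell defaults, not recommendations from this card):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("matteomarjanovic/flatsketcher")
pipe.to("cuda")

prompt = "technical flat sketch of a sleeveless top with a plunging V-neckline, in the style of FLTSKC"
image = pipe(prompt, guidance_scale=0.0, num_inference_steps=4).images[0]
image.save("flat_sketch.png")
```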
|
jssky/25816419-4f20-48f7-8e2f-7c9fb70b7897 | jssky | "2025-04-15T05:46:44Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-15T05:25:36Z" |  |
tinycompany/Llamaify-inf-l-3B | tinycompany | "2025-04-07T15:13:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-07T15:03:27Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
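Pending details from the authors, the following is a minimal sketch for standard causal-LM inference (the repository tags indicate a Llama-architecture text-generation model; the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tinycompany/Llamaify-inf-l-3B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; the intended prompt/chat format is not documented.
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```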
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
S2312dal/M6_MLM_cross | S2312dal | "2022-06-18T09:44:44Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-06-18T08:51:56Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: M6_MLM_cross
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M6_MLM_cross
This model is a fine-tuned version of [S2312dal/M6_MLM](https://huggingface.co/S2312dal/M6_MLM) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0197
- Pearson: 0.9680
- Spearmanr: 0.9098
## Model description
More information needed
## Intended uses & limitations
More information needed
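No usage example is documented. The sketch below shows one plausible way to use the checkpoint as a sentence-pair cross-encoder with 🤗 transformers; treating the output as a single similarity score is an assumption based on the `_cross` naming and the Pearson/Spearman evaluation metrics reported above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "S2312dal/M6_MLM_cross"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode the two texts together, cross-encoder style (illustrative inputs).
inputs = tokenizer("A man is playing a guitar.", "Someone plays an instrument.", return_tensors="pt")

with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # assumed single regression output
print(score)
```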
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8.0
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| 0.0723 | 1.0 | 131 | 0.0646 | 0.8674 | 0.8449 |
| 0.0433 | 2.0 | 262 | 0.0322 | 0.9475 | 0.9020 |
| 0.0015 | 3.0 | 393 | 0.0197 | 0.9680 | 0.9098 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
mandardighe6/sdxl-lora-testing | mandardighe6 | "2025-04-09T07:23:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-09T07:23:31Z" | |
Iriraiai/donut_rus_pricetag_1 | Iriraiai | "2025-03-15T00:37:16Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-03-14T21:28:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
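Until the authors document usage, the sketch below follows the standard Donut (VisionEncoderDecoder) workflow; the task prompt token `<s_pricetag>` is purely hypothetical and should be replaced with whatever start token this checkpoint was actually trained with.

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "Iriraiai/donut_rus_pricetag_1"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("pricetag.jpg").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Hypothetical task prompt; Donut checkpoints are usually driven by a task-specific start token.
task_prompt = "<s_pricetag>"
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```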
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dabrown/cc19fb13-0204-430d-96f4-5e344edaab60 | dabrown | "2025-02-28T11:02:33Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
] | null | "2025-02-28T06:35:26Z" | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-128k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cc19fb13-0204-430d-96f4-5e344edaab60
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.5.2`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-128k-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2ab5508654347f3c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2ab5508654347f3c_train_data.json
type:
field_instruction: user_prompt
field_output: resp
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: false
group_by_length: true
hub_model_id: dabrown/cc19fb13-0204-430d-96f4-5e344edaab60
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_inference_mode: true
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/2ab5508654347f3c_train_data.json
model_type: AutoModelForCausalLM
modules_to_save: lm_head
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
peft_use_rslora: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: offline
wandb_name: 946903bb-e331-4529-be0a-a81d1c829510
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 946903bb-e331-4529-be0a-a81d1c829510
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cc19fb13-0204-430d-96f4-5e344edaab60
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 1081
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6039 | 0.0009 | 1 | 0.8865 |
| 0.1576 | 0.2508 | 271 | 0.1417 |
| 0.0507 | 0.5017 | 542 | 0.1345 |
| 0.1386 | 0.7525 | 813 | 0.1234 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.3.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3 |
deepseek-ai/DeepSeek-Prover-V1.5-Base | deepseek-ai | "2024-08-29T12:15:00Z" | 437 | 11 | null | [
"safetensors",
"llama",
"arxiv:2408.08152",
"license:other",
"region:us"
] | null | "2024-08-15T14:35:40Z" | ---
license: other
license_name: deepseek-license
license_link: LICENSE
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V2" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V2-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="#2-evaluation-results">Evaluation Results</a> |
<a href="#3-model-downloads">Model Download</a> |
<a href="#4-license">License</a> |
<a href="#5-citation">Citation</a> |
<a href="#6-contact">Contact</a>
</p>
<p align="center">
<a href="https://arxiv.org/abs/2408.08152"><b>Paper Link</b>👁️</a>
</p>
# DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search
## 1. Introduction
We introduce DeepSeek-Prover-V1.5, an open-source language model designed for theorem proving in Lean 4, which enhances DeepSeek-Prover-V1 by optimizing both training and inference processes. Pre-trained on DeepSeekMath-Base with specialization in formal mathematical languages, the model undergoes supervised fine-tuning using an enhanced formal theorem proving dataset derived from DeepSeek-Prover-V1. Further refinement is achieved through reinforcement learning from proof assistant feedback (RLPAF). Beyond the single-pass whole-proof generation approach of DeepSeek-Prover-V1, we propose RMaxTS, a variant of Monte-Carlo tree search that employs an intrinsic-reward-driven exploration strategy to generate diverse proof paths. DeepSeek-Prover-V1.5 demonstrates significant improvements over DeepSeek-Prover-V1, achieving new state-of-the-art results on the test set of the high school level miniF2F benchmark (63.5%) and the undergraduate level ProofNet benchmark (25.3%).
<p align="center">
<img width="100%" src="https://github.com/deepseek-ai/DeepSeek-Prover-V1.5/blob/main/figures/performance.png?raw=true">
</p>
## 2. Evaluation Results
<div align="center">
| | miniF2F-test | ProofNet |
|--------|------------------|------------------|
| **ReProver** | 26.5% | 13.8% |
| **GPT-f** | 36.6% | - |
| **Hypertree Proof Search** | 41.0% | - |
| **InternLM2-StepProver** | 54.5% | 18.1% |
| **DeepSeek-Prover-V1** | 50.0% | - |
| **DeepSeek-Prover-V1.5-Base** | 42.2% | 13.2% |
| **DeepSeek-Prover-V1.5-SFT** | 57.4% | 22.9% |
| **DeepSeek-Prover-V1.5-RL** | 60.2% | 22.6% |
| **DeepSeek-Prover-V1.5-RL + RMaxTS** | **63.5%** | **25.3%** |
</div>
## 3. Model Downloads
We release the DeepSeek-Prover-V1.5 with 7B parameters, including base, SFT and RL models, to the public.
<div align="center">
| **Model** | **Download** |
| :-----------------------------: | :----------------------------------------------------------: |
| DeepSeek-Prover-V1.5-Base | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V1.5-Base) |
| DeepSeek-Prover-V1.5-SFT | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V1.5-SFT) |
| DeepSeek-Prover-V1.5-RL | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V1.5-RL) |
</div>
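As a quick-start illustration (not an official recipe from the paper), the base model can be loaded like any causal LM with 🤗 transformers and prompted to continue a Lean 4 proof:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-Prover-V1.5-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Illustrative Lean 4 prompt; the whole-proof generation and RMaxTS pipelines from the paper are not shown here.
prompt = "theorem add_comm_example (a b : Nat) : a + b = b + a := by\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```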
## 4. License
This code repository is licensed under the MIT License. The use of DeepSeekMath models is subject to the Model License. DeepSeekMath supports commercial use.
See the [LICENSE-CODE](LICENSE-CODE) and [LICENSE-MODEL](LICENSE-MODEL) for more details.
## 5. Citation
```latex
@article{xin2024deepseekproverv15harnessingproofassistant,
title={DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search},
author={Huajian Xin and Z. Z. Ren and Junxiao Song and Zhihong Shao and Wanjia Zhao and Haocheng Wang and Bo Liu and Liyue Zhang and Xuan Lu and Qiushi Du and Wenjun Gao and Qihao Zhu and Dejian Yang and Zhibin Gou and Z. F. Wu and Fuli Luo and Chong Ruan},
year={2024},
eprint={2408.08152},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2408.08152},
}
```
## 6. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
|
Harshraj8721/agri_finetuned_model-finetuned-batch-30-finetuned-batch-69 | Harshraj8721 | "2025-03-09T16:51:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:Harshraj8721/agri_finetuned_model-finetuned-batch-30",
"base_model:finetune:Harshraj8721/agri_finetuned_model-finetuned-batch-30",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-09T11:49:54Z" | ---
library_name: transformers
license: mit
base_model: Harshraj8721/agri_finetuned_model-finetuned-batch-30
tags:
- generated_from_trainer
model-index:
- name: agri_finetuned_model-finetuned-batch-30-finetuned-batch-69
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# agri_finetuned_model-finetuned-batch-30-finetuned-batch-69
This model is a fine-tuned version of [Harshraj8721/agri_finetuned_model-finetuned-batch-30](https://huggingface.co/Harshraj8721/agri_finetuned_model-finetuned-batch-30) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
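No usage example is provided. As an illustrative sketch, the checkpoint can be queried with the 🤗 transformers `text-classification` pipeline (the label set and its meaning are not documented):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Harshraj8721/agri_finetuned_model-finetuned-batch-30-finetuned-batch-69",
)
# Illustrative agriculture-related query; output labels depend on how the model was trained.
print(classifier("What is the best sowing season for wheat?"))
```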
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0 | 1.0 | 819 | 0.0 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
silverside/coke_style_redo | silverside | "2025-04-08T21:49:47Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-04-08T19:27:56Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: coke_style_redo
---
# Coke_Style_Redo
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `coke_style_redo` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "coke_style_redo",
"lora_weights": "https://huggingface.co/silverside/coke_style_redo/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('silverside/coke_style_redo', weight_name='lora.safetensors')
image = pipeline('coke_style_redo').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0001
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/silverside/coke_style_redo/discussions) to add images that show off what you’ve made with this LoRA.
|
NCCUTAT/T5_nolora16 | NCCUTAT | "2025-03-26T05:47:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-03-26T05:46:04Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
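Since the repository tags indicate a T5-style text2text checkpoint, the following is a minimal loading sketch; the input text is only illustrative, as the task format this fine-tune expects is not documented.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "NCCUTAT/T5_nolora16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative input; replace with the task format the model was trained on.
inputs = tokenizer("your input text here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```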
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ybelkada/tiny-random-llama-Q4_K_M-GGUF | ybelkada | "2024-05-22T08:38:41Z" | 3 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"endpoints_compatible",
"region:us"
] | null | "2024-05-22T08:38:40Z" | ---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# ybelkada/tiny-random-llama-Q4_K_M-GGUF
This model was converted to GGUF format from [`ybelkada/tiny-random-llama`](https://huggingface.co/ybelkada/tiny-random-llama) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ybelkada/tiny-random-llama) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo ybelkada/tiny-random-llama-Q4_K_M-GGUF --model tiny-random-llama.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo ybelkada/tiny-random-llama-Q4_K_M-GGUF --model tiny-random-llama.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tiny-random-llama.Q4_K_M.gguf -n 128
```
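The GGUF file can also be used from Python through the `llama-cpp-python` bindings (a separate install); the snippet below is an untested sketch:

```python
from llama_cpp import Llama

# Pulls the GGUF from this repo via huggingface_hub and runs a short completion.
llm = Llama.from_pretrained(
    repo_id="ybelkada/tiny-random-llama-Q4_K_M-GGUF",
    filename="tiny-random-llama.Q4_K_M.gguf",
)
out = llm("The meaning to life and the universe is", max_tokens=32)
print(out["choices"][0]["text"])
```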
|
MaxyLee/DeepPerception-FGVR | MaxyLee | "2025-03-20T07:43:34Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"conversational",
"en",
"arxiv:2503.12797",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-03-11T09:41:49Z" | ---
base_model:
- Qwen/Qwen2-VL-7B-Instruct
language:
- en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: image-text-to-text
library_name: transformers
---
# DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding
This is the official repository of **DeepPerception**, an MLLM enhanced with cognitive visual perception capabilities.
[Project Page](https://deepperception-kvg.github.io/)
[Paper](https://arxiv.org/abs/2503.12797)
## Overview
<p align="center">
<img src="figs/header.png" width="100%"><br>
Figure 1: (a) <strong>DeepPerception</strong> employs knowledge-driven reasoning to derive answers, while the baseline model directly outputs predictions without cognitive processing. (b) <strong>DeepPerception</strong> demonstrates superior cognitive visual perception capabilities that cannot be elicited in the foundation model through simplistic zero-shot CoT prompting.
</p>
#### Abstract
Human experts excel at fine-grained visual discrimination by leveraging domain knowledge to refine perceptual features, a capability that remains underdeveloped in current Multimodal Large Language Models (MLLMs). Despite possessing vast expert-level knowledge, MLLMs struggle to integrate reasoning into visual perception, often generating direct responses without deeper analysis.
To bridge this gap, we introduce knowledge-intensive visual grounding (KVG), a novel visual grounding task that requires both fine-grained perception and domain-specific knowledge integration. To address the challenges of KVG, we propose **DeepPerception**, an MLLM enhanced with cognitive visual perception capabilities. Our approach consists of (1) an automated data synthesis pipeline that generates high-quality, knowledge-aligned training samples, and (2) a two-stage training framework combining supervised fine-tuning for cognitive reasoning scaffolding and reinforcement learning to optimize perception-cognition synergy. To benchmark performance, we introduce KVG-Bench, a comprehensive dataset spanning 10 domains with 1.3K manually curated test cases.
Experimental results demonstrate that DeepPerception significantly outperforms direct fine-tuning, achieving +8.08% accuracy improvements on KVG-Bench and exhibiting +4.60% superior cross-domain generalization over baseline approaches. Our findings highlight the importance of integrating cognitive processes into MLLMs for human-like visual perception and open new directions for multimodal reasoning research.
#### Key Contributions
- We introduce the task of **Knowledge-intensive Visual Grounding (KVG)** to explore the concept of cognitive visual perception for MLLMs, aiming to integrate their inherent knowledge and reasoning capabilities into visual perception.
- We propose **[DeepPerception](https://huggingface.co/MaxyLee/DeepPerception)**, an MLLM with enhanced cognitive visual perception capabilities. To achieve this, we develop an automated dataset creation pipeline and a two-stage framework integrating supervised cognitive capability enhancement with perception-oriented reinforcement learning.
- We introduce **[KVG-Bench](https://huggingface.co/datasets/MaxyLee/KVG-Bench)**, a manually curated benchmark for the KVG task involving diverse knowledge domains and entities. Experiments on KVG-Bench and other fine-grained visual recognition tasks demonstrate DeepPerception's exceptional cognitive visual perception capabilities and superior cross-domain generalization performance.
## Get Started
### Contents:
- [Environment](#environment)
- [Data Preparation](#data-preparation)
- [Checkpoints](#checkpoints)
- [Evaluation](#evaluation)
- [Training](#training)
### Environment
1. Clone this repository and navigate to DeepPerception folder
```bash
git clone https://github.com/MaxyLee/DeepPerception.git
cd DeepPerception
```
2. Install Packages
For evaluation:
```bash
conda create -n deepperception python=3.9
conda activate deepperception
pip install -r requirements.txt
```
### Data Preparation
| Dataset | Links |
|--------- |---------------------------------------|
| KVG-Bench | [`🤗HuggingFace`](https://huggingface.co/datasets/MaxyLee/KVG-Bench) |
| KVG Training | [`🤗HuggingFace`](https://huggingface.co/datasets/MaxyLee/KVG) |
---
### Checkpoints
| Model | Links |
|--------- |---------------------------------------|
| DeepPerception | [`🤗HuggingFace`](https://huggingface.co/MaxyLee/DeepPerception) |
| DeepPerception-FGVR | [`🤗HuggingFace`](https://huggingface.co/MaxyLee/DeepPerception-FGVR) |
---
### Evaluation
```bash
# Evaluate on KVG-Bench
bash eval.sh [CUDA_IDS] [KVG_BENCH_PATH] [CKPT_PATH]
```
Notice: Please modify the script if you want to evaluate on Qwen2-VL.
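For standalone inference outside `eval.sh`, the checkpoint should load like a regular Qwen2-VL model. The following is an unofficial sketch that assumes the standard Qwen2-VL processor and chat-template interface; the image path and question are illustrative.

```python
from PIL import Image
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor

model_id = "MaxyLee/DeepPerception-FGVR"
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example_bird.jpg")
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Locate the Bohemian Waxwing in this image and explain your reasoning."},
    ],
}]

# Build the chat prompt, then batch the text and image through the processor.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```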
### Training
TODO
## Citation
If you find DeepPerception useful for your research or applications, please cite using this BibTeX:
```bibtex
@misc{ma2025deepperception,
title={DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding},
author={Xinyu Ma and Ziyang Ding and Zhicong Luo and Chi Chen and Zonghao Guo and Derek F. Wong and Xiaoyi Feng and Maosong Sun},
year={2025},
url={https://arxiv.org/abs/2503.12797},
}
```
## Acknowledgement
- [Qwen2-VL](https://github.com/QwenLM/Qwen2.5-VL)
- [vLLM](https://github.com/vllm-project/vllm)
- [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)
- [R1-V](https://github.com/Deep-Agent/R1-V)
## License
[](https://github.com/twbs/bootstrap/blob/main/LICENSE)
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE) |
eden-art/perplexity | eden-art | "2025-02-12T01:24:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-12T01:24:13Z" | ## Exporting LoRas for use in other tools
Documentation on how to use Eden concepts in Automatic1111 or ComfyUI is here:
https://docs.eden.art/docs/guides/concepts/#exporting-loras-for-use-in-other-tools
[perplexity](https://d14i3advvh2bvd.cloudfront.net/ae30dcbf9f53a436aa26f04b9acece5f60461876e0335df3e502698ed7f6cba6.tar)
LoRA trained on [Eden.art](https://eden.art) by [syntonikka](https://app.eden.art/creators/syntonikka) on 14 images.
* [How to train Concepts (LoRAs) on Eden](https://docs.eden.art/docs/guides/concepts)
* [How to export LoRAs from Eden](https://docs.eden.art/docs/guides/concepts#exporting-loras-for-use-in-other-tools)
 |
TheRamsay/common-voice-czech-bpe | TheRamsay | "2025-03-08T03:33:48Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-08T02:50:38Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
madnz8/ai-cabaca-02 | madnz8 | "2025-03-18T14:12:52Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-18T14:07:00Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AICABACA02
---
# Ai Cabaca 02
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AICABACA02` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('madnz8/ai-cabaca-02', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
ardaspear/370cd2ba-2e2e-49eb-83bd-8453f2804288 | ardaspear | "2025-01-13T12:19:08Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B",
"base_model:adapter:unsloth/Qwen2.5-3B",
"license:other",
"region:us"
] | null | "2025-01-13T11:43:27Z" | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 370cd2ba-2e2e-49eb-83bd-8453f2804288
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ce9dadc7454c010e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ce9dadc7454c010e_train_data.json
type:
field_instruction: query
field_output: pos
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: ardaspear/370cd2ba-2e2e-49eb-83bd-8453f2804288
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/ce9dadc7454c010e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: e1903090-cc56-4c42-a6d4-4dba42aec63c
wandb_project: Gradients-On-Five
wandb_run: your_name
wandb_runid: e1903090-cc56-4c42-a6d4-4dba42aec63c
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 370cd2ba-2e2e-49eb-83bd-8453f2804288
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8388
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 2.0069 |
| 2.0372 | 0.0056 | 9 | 1.9927 |
| 1.9847 | 0.0111 | 18 | 1.9427 |
| 1.9672 | 0.0167 | 27 | 1.9087 |
| 1.8627 | 0.0222 | 36 | 1.8849 |
| 1.9091 | 0.0278 | 45 | 1.8673 |
| 1.9301 | 0.0333 | 54 | 1.8561 |
| 1.9597 | 0.0389 | 63 | 1.8482 |
| 1.8888 | 0.0445 | 72 | 1.8431 |
| 1.8095 | 0.0500 | 81 | 1.8402 |
| 1.8438 | 0.0556 | 90 | 1.8390 |
| 1.8484 | 0.0611 | 99 | 1.8388 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RayneAmes/hagrid_v2 | RayneAmes | "2025-02-09T20:54:00Z" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-09T20:52:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
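Given the `parler_tts` tag on this repository, one plausible (unverified for this checkpoint) starting point is the standard Parler-TTS generation loop, which conditions on a text prompt plus a natural-language voice description:

```python
import soundfile as sf
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer

model_id = "RayneAmes/hagrid_v2"
model = ParlerTTSForConditionalGeneration.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

description = "A deep, warm male voice speaking slowly."  # illustrative voice description
prompt = "Yer a wizard, Harry."                           # illustrative text to synthesize

input_ids = tokenizer(description, return_tensors="pt").input_ids
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids

audio = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
sf.write("out.wav", audio.cpu().numpy().squeeze(), model.config.sampling_rate)
```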
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
huggingtweets/drumbunkerdrag1 | huggingtweets | "2021-05-22T02:21:16Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1338105955188355074/gOecZ4jM_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Night the aeromorph 🤖 AI Bot </div>
<div style="font-size: 15px">@drumbunkerdrag1 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@drumbunkerdrag1's tweets](https://twitter.com/drumbunkerdrag1).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 148 |
| Retweets | 8 |
| Short tweets | 27 |
| Tweets kept | 113 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1sfm2bdn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @drumbunkerdrag1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2trykl4g) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2trykl4g/artifacts) is logged and versioned.
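For intuition, fine-tuning a causal language model such as GPT-2 on a small text corpus generally follows the pattern sketched below; this is not the huggingtweets training script, and the corpus and hyperparameters shown are illustrative:

```python
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Tiny illustrative corpus standing in for the downloaded tweets
ds = Dataset.from_dict({"text": ["example tweet one", "example tweet two"]})
ds = ds.map(lambda x: tokenizer(x["text"], truncation=True, max_length=128), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-tweets", num_train_epochs=1, per_device_train_batch_size=2),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```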
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/drumbunkerdrag1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
zsxkib/squish-pika-lora | zsxkib | "2025-03-24T23:31:13Z" | 0 | 0 | null | [
"image-to-video",
"lora",
"replicate",
"text-to-video",
"video",
"video-generation",
"en",
"zh",
"base_model:Wan-AI/Wan2.1-T2V-14B-Diffusers",
"base_model:adapter:Wan-AI/Wan2.1-T2V-14B-Diffusers",
"license:apache-2.0",
"region:us"
] | text-to-video | "2025-03-24T16:21:27Z" | ---
license: apache-2.0
language:
- en
- zh
tags:
- image-to-video
- lora
- replicate
- text-to-video
- video
- video-generation
base_model: "Wan-AI/Wan2.1-T2V-14B-Diffusers"
pipeline_tag: text-to-video
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SQUISH-IT
---
# Squish Pika Lora
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the Wan 14B Text-to-Video model.
It can be used with diffusers or ComfyUI, and can be loaded against the Wan 14B models.
It was trained on [Replicate](https://replicate.com/) with 10 steps at a learning rate of 2e-05 and LoRA rank of 32.
## Trigger word
You should use `SQUISH-IT` to trigger the video generation.
## Use this LoRA
Replicate has a collection of Wan models that are optimised for speed and cost. They can also be used with this LoRA:
- https://replicate.com/collections/wan-video
- https://replicate.com/fofr/wan-with-lora
### Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "SQUISH-IT",
    "lora_url": "https://huggingface.co/zsxkib/squish-pika-lora/resolve/main/wan-14b-t2v-squish-it-lora.safetensors"
}

output = replicate.run(
    "fofr/wan-with-lora:latest",
    model="14B",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.mp4", "wb") as file:
        file.write(item.read())
```
### Using with Diffusers
```py
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

# Requires a diffusers version with Wan 2.1 support (WanPipeline)
# Load the base model
pipe = WanPipeline.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", torch_dtype=torch.float16)
pipe.to("cuda")

# Load and apply the LoRA weights
pipe.load_lora_weights("zsxkib/squish-pika-lora")

# Generate video
prompt = "SQUISH-IT"
negative_prompt = "blurry, low quality, low resolution"

# Generate video frames
frames = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=30,
    guidance_scale=5.0,
    width=832,
    height=480,
    num_frames=32,
).frames[0]

# Save as video
video_path = "output.mp4"
export_to_video(frames, video_path, fps=16)
print(f"Video saved to: {video_path}")
```
## Training details
- Steps: 10
- Learning rate: 2e-05
- LoRA rank: 32
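For reference, a rank-32 LoRA of this kind would be described with a `peft` config along these lines; `lora_alpha`, `target_modules`, and dropout below are illustrative guesses rather than the values used for this training run:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                # LoRA rank, matching the value above
    lora_alpha=32,       # illustrative; actual value not documented
    lora_dropout=0.0,    # illustrative
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # illustrative guess for a DiT-style video model
)
print(lora_config)
```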
## Contribute your own examples
You can use the [community tab](https://huggingface.co/zsxkib/squish-pika-lora/discussions) to add videos that show off what you've made with this LoRA.
|
h-asterix/mistralai-Finetune-test-epoch-1.0-test-multigpu-length-optimised-1 | h-asterix | "2024-01-12T09:53:01Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-v0.1",
"region:us"
] | null | "2024-01-12T09:49:00Z" | ---
library_name: peft
base_model: mistralai/Mixtral-8x7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
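As the section above is unfilled, the following is a minimal, hypothetical sketch of loading this repository as a PEFT adapter on top of the Mixtral base model; adapter compatibility, precision, and generation settings are assumptions:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mixtral-8x7B-v0.1"
adapter_id = "h-asterix/mistralai-Finetune-test-epoch-1.0-test-multigpu-length-optimised-1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, ", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```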
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
gotutiyan/token-ged-electra-large-bin | gotutiyan | "2024-02-03T15:07:30Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"token-classification",
"grammatical error correction",
"grammatical error detection",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-02-03T14:57:29Z" | ---
language: en
license: cc-by-nc-sa-4.0
tags:
- grammatical error correction
- grammatical error detection
---
Binary and multi-class grammatical error detection models.
The experiment was performed according to [Yuan+ 21](https://aclanthology.org/2021.emnlp-main.687/).
The code and the performance on GEC benchmarks are available at https://github.com/gotutiyan/ged_baselines.
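The checkpoint can be tried directly with the `transformers` token-classification pipeline (a minimal sketch; the label names returned depend on this checkpoint's config):

```python
from transformers import pipeline

# Token-level grammatical error detection as token classification
ged = pipeline("token-classification", model="gotutiyan/token-ged-electra-large-bin")
print(ged("She go to school yesterday ."))
```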
Trained models are distributed for research and educational purposes only. |
initial01/segformer-b0-scene-parse-150 | initial01 | "2024-11-21T08:58:49Z" | 35 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-11-21T08:50:07Z" | ---
library_name: transformers
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6864
- Mean Iou: 0.0593
- Mean Accuracy: 0.1052
- Overall Accuracy: 0.4359
- Per Category Iou: [0.04561118994526657, 0.371474721092176, 0.2249066594257009, 0.12191163451994527, 0.5429954768427088, 0.4549738032652039, 0.5223870813967346, 0.0, 0.3278225074735203, 0.01314822376480196, 0.0, 0.002639544391685435, nan, 0.0, 0.0, 0.0, 0.06067888781685817, nan, 0.4152131860401912, 0.0, 0.037789830270479, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0]
- Per Category Accuracy: [0.09566436459326845, 0.8307894804263168, 0.22745590729031506, 0.3465776930409914, 0.7988512056984481, 0.8809161098223869, 0.6830946407143637, nan, 0.5583801487972583, 0.014884354198883884, nan, 0.002639544391685435, nan, nan, nan, 0.0, 0.10095143590940929, nan, 0.5724235211143437, 0.0, 0.044599577412271614, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
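These settings correspond roughly to the Trainer setup sketched below; dataset preprocessing and label mappings are omitted, and the snippet is not the exact training script:

```python
from transformers import SegformerForSemanticSegmentation, Trainer, TrainingArguments

# 150 scene_parse_150 classes; the decode head is newly initialised on top of mit-b0
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/mit-b0", num_labels=150)

args = TrainingArguments(
    output_dir="segformer-b0-scene-parse-150",
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    num_train_epochs=50,
    seed=42,
    lr_scheduler_type="linear",
)

# train_ds / eval_ds would be preprocessed scene_parse_150 splits (not shown here)
trainer = Trainer(model=model, args=args, train_dataset=None, eval_dataset=None)
# trainer.train()
```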
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 3.8465 | 1.0 | 20 | 3.9259 | 0.0349 | 0.0764 | 0.3543 | [0.029838917263896602, 0.32960270575198447, 0.2666900759338103, 0.10373400091675188, 0.35948634452610095, 0.29683504902505725, 0.3106824610495742, 0.0, 0.17841807853644284, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.034701852106784704, nan, 0.0, 0.0, 0.006080665024630542, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0026860503602617308, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0001249375312343828, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.05061880192592763, 0.8401503894240623, 0.29199706163024597, 0.3365490943755958, 0.517929492415012, 0.9244711421943704, 0.478408577350434, nan, 0.1953560505823916, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.09563973850105066, nan, 0.0, 0.0, 0.009818950159506153, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.004069823512392709, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.00012500520855035626, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 3.7938 | 2.0 | 40 | 3.6606 | 0.0356 | 0.0745 | 0.3728 | [0.016653424439853806, 0.3382388194160414, 0.07253498121631187, 0.1883868329584952, 0.3644220278957351, 0.3285098144926116, 0.23512605355179717, 0.0, 0.33592979273521256, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.037041044506960975, nan, 0.0030993857241546708, 0.0, 0.0011113203993219061, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.024085586569067948, 0.9150248514039763, 0.07317089915487543, 0.3513441372735939, 0.5814104194839798, 0.9282622996226154, 0.29497109196029236, nan, 0.3896113051211418, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.08991944898435676, nan, 0.0036586440976813894, 0.0, 0.0012221899987570949, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 3.2407 | 3.0 | 60 | 3.6695 | 0.0449 | 0.0898 | 0.3901 | [0.03845086557139096, 0.3432699814177488, 0.3265801006550026, 0.17270976385865014, 0.40903362025299184, 0.2876716919794221, 0.3418471142323663, 0.0, 0.33254897096050146, 0.0011545266440759288, nan, 0.0, nan, 0.0, nan, 0.0, 0.055739253660840815, nan, 5.49098563192093e-05, 0.0, 0.026526517140499587, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.06209852567711984, 0.7998405154744824, 0.3310059797197564, 0.4718779790276454, 0.642237604937941, 0.8915451995983796, 0.57347784755879, nan, 0.4651758020085461, 0.0012942916694681638, nan, 0.0, nan, nan, nan, 0.0, 0.10331543310763483, nan, 6.612007405448295e-05, 0.0, 0.058602974686166466, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 3.3782 | 4.0 | 80 | 3.6921 | 0.0454 | 0.0894 | 0.4016 | [0.04141596683320931, 0.3474656859913958, 0.34969660932199953, 0.12268708712656522, 0.4423430378801086, 0.3142568152773656, 0.31574425219246266, 0.0, 0.3563042576214923, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.09071967469292423, nan, 0.0014806537225869536, 0.0, 0.02470010629144101, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.06715466026222952, 0.8284724713055954, 0.3553024529895166, 0.42703527168732125, 0.7134333953700108, 0.8982100197347921, 0.41519269429163613, nan, 0.474947400386092, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.1575706280644408, nan, 0.0023362426165917305, 0.0, 0.04043584538260762, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 3.401 | 5.0 | 100 | 3.5008 | 0.0482 | 0.0943 | 0.4061 | [0.029615433150356488, 0.3394024169650486, 0.47332643712132405, 0.10603584201822866, 0.4704126685311465, 0.28172732002519235, 0.40326244363615227, 0.0, 0.3276541079474643, 0.0008033528613761586, nan, 0.0, nan, 0.0, nan, 0.0, 0.0835430504558769, nan, 0.0, 0.0, 0.03993045922802345, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.043149512197924685, 0.7607315792170527, 0.48377374552894087, 0.4205529075309819, 0.7440893328156172, 0.8905238375515009, 0.6487712135143138, nan, 0.4633321042014619, 0.0008908760841793854, nan, 0.0, nan, nan, nan, 0.0, 0.11819985991127714, nan, 0.0, 0.0, 0.044723867920619796, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.9146 | 6.0 | 120 | 3.2784 | 0.0507 | 0.0918 | 0.4184 | [0.00974911913529552, 0.3540400772259257, 0.545332779705059, 0.18816339184221792, 0.4270442851892686, 0.30430787824971633, 0.3394510205102829, 0.0, 0.33964546888442787, 8.364319334504338e-05, nan, 0.0, nan, 0.0, nan, 0.0, 0.07423940663415975, nan, 0.00035815452739882135, 0.0, 0.0039012218547636495, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.012686301322638843, 0.8533046602787456, 0.5590317110511537, 0.48602478551000955, 0.7467885977479347, 0.8542914517190042, 0.47572815533980584, nan, 0.4110253128863631, 9.24494049620117e-05, nan, 0.0, nan, nan, nan, 0.0, 0.09464744338080784, nan, 0.00048488054306620825, 0.0, 0.004080871690765215, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 3.0806 | 7.0 | 140 | 3.3092 | 0.0499 | 0.0933 | 0.4114 | [0.02477965672837684, 0.35822548054894127, 0.5118927006662596, 0.10369005527613166, 0.42781918677224806, 0.32367639327856706, 0.4671336935348993, 0.0, 0.2693485744775821, 0.005371252593454735, nan, 0.0, nan, 0.0, nan, 0.0, 0.04160471919811999, nan, 0.006967241498718748, 0.0, 0.051957359132849594, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.00016601092351876753, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.033761189570573294, 0.7636394496823119, 0.5237781393528721, 0.3969876072449952, 0.8278361696145247, 0.8846553335872313, 0.7268930480450062, nan, 0.2846832093356181, 0.006244537080615881, nan, 0.0, nan, nan, nan, 0.0, 0.05063623628297922, nan, 0.011086132416468306, 0.0, 0.06007374570161992, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.00019288263091908573, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 3.2397 | 8.0 | 160 | 3.1418 | 0.0498 | 0.0899 | 0.4173 | [0.006435500912496398, 0.35392414248698456, 0.5701394804088586, 0.12431887785540986, 0.4203228186655299, 0.3001067655988119, 0.4157648928446891, 0.0, 0.28156198903529867, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.048256341202011596, nan, 0.014121736611507405, 0.0, 0.0050832532821005446, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.007699114027326109, 0.8278463824554212, 0.588195717394737, 0.35755958055290754, 0.7947487514560767, 0.8710140913340026, 0.5667144571366236, nan, 0.3174847623798885, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.05124912444548214, nan, 0.017191219254165564, 0.0, 0.005261631520072917, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.9765 | 9.0 | 180 | 3.2750 | 0.0501 | 0.0965 | 0.4170 | [0.014365576257406988, 0.35769697714917664, 0.42094108246829315, 0.11937454906204906, 0.47282021286064874, 0.2937464820443544, 0.46301276370340944, 0.0, 0.35485359789107784, 0.0031046039865937555, nan, 0.0, nan, 0.0, nan, 0.0, 0.08303034800029681, nan, 0.006607377716540365, 0.0, 0.06646321296491176, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.01969594245199545, 0.7732389449682312, 0.4267844760708229, 0.40377502383222114, 0.796583073359487, 0.9034206972959873, 0.7083404759307453, nan, 0.46278441749994575, 0.003697976198480468, nan, 0.0, nan, nan, nan, 0.0, 0.13063273406490777, nan, 0.009300890416997267, 0.0, 0.09158138956788334, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.7641 | 10.0 | 200 | 3.1352 | 0.0485 | 0.0911 | 0.4196 | [0.013668833774781283, 0.3650855340290058, 0.3854929233772572, 0.10671665818472628, 0.46724762937926045, 0.2848988573109225, 0.37933238079735704, 0.0, 0.3653871004215865, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.04590614969175116, nan, 0.001372704077826353, 0.0, 0.05752103122993596, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.019569539087367707, 0.8122614137118261, 0.39043931373962476, 0.39180171591992374, 0.8373558986168945, 0.9055153550531454, 0.49027567828702334, nan, 0.4906567902305707, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.05259164137286949, nan, 0.0020276822710041434, 0.0, 0.07237850602808965, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.3333 | 11.0 | 220 | 3.0659 | 0.0490 | 0.0908 | 0.4128 | [0.039110258906553466, 0.3532966430621289, 0.2806423658127065, 0.11260427938147928, 0.49353480705306213, 0.32321534874491514, 0.3988072587166437, 0.0, 0.35021707043643496, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.07298330395516033, nan, 0.06704530268608846, 0.0, 0.006844572807659591, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.09094147524217736, 0.8411415633326501, 0.28458248374628414, 0.3690371782650143, 0.7729082705156185, 0.9023127791434408, 0.5226277486013495, nan, 0.4387512743205431, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.10754727994396451, nan, 0.11057480384378031, 0.0, 0.007167419314744997, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.3064 | 12.0 | 240 | 3.0843 | 0.0559 | 0.0993 | 0.4282 | [0.049852519629429604, 0.36806590445761156, 0.48440113299163745, 0.09999368487527628, 0.4667211769221887, 0.3323045788009107, 0.4877944325481799, 0.0, 0.3462282709060982, 9.627276295424082e-05, nan, 0.0, nan, 0.0, nan, 0.0, 0.09900185176821281, nan, 0.08684866887165862, 0.0, 0.031139728618720216, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.09652620571573033, 0.7909663865546218, 0.49311748673271133, 0.3018875119161106, 0.8009479561369448, 0.9096008032406606, 0.7011251538905079, nan, 0.4676593714075006, 0.00010925838768237747, nan, 0.0, nan, nan, nan, 0.0, 0.12794770021013308, nan, 0.14121043815569073, 0.0, 0.03546422504868045, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.242 | 13.0 | 260 | 3.0068 | 0.0514 | 0.0943 | 0.4225 | [0.043821966603945516, 0.3676676475243072, 0.35758924633413175, 0.13307805433003136, 0.45094627128107895, 0.3184548423511799, 0.4805941244422297, 0.0, 0.35058391997354216, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.049894171699963866, nan, 0.05913021681915027, 0.0, 0.008521629481990685, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.0851039380393689, 0.822097894035663, 0.3617970739878758, 0.3657578646329838, 0.7980264303024622, 0.9132188484575702, 0.6613084199535602, nan, 0.44261761707481073, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.05641489610086388, nan, 0.10398483646301684, 0.0, 0.008679620499647843, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.4269 | 14.0 | 280 | 2.9189 | 0.0586 | 0.1007 | 0.4365 | [0.03269621548945012, 0.3575789266292028, 0.574882153985459, 0.14054319806532195, 0.48986045451574683, 0.3892210407541197, 0.43681909262866025, 0.0, 0.34389494088161093, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.045454545454545456, nan, 0.17042934093789608, 0.0, 0.009437419827744181, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.04918240005515783, 0.8398717718794835, 0.5927886364728578, 0.4033174451858913, 0.7827627297923333, 0.8598829761451373, 0.6064065202823793, nan, 0.45312669457519034, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.052387345318701845, nan, 0.2845146786564401, 0.0, 0.01066826863321871, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.9238 | 15.0 | 300 | 3.0128 | 0.0538 | 0.1030 | 0.4263 | [0.046991869918699185, 0.371658445939626, 0.2467404523620255, 0.12363770250368189, 0.48116788321167886, 0.3654494323217239, 0.5142577659016081, 0.0, 0.34284942654975603, 0.0005256794406770752, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.08771527936858327, nan, 0.15367814950858733, 0.0, 0.061618272747630064, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.086344989255714, 0.7634985396597663, 0.24932204669810998, 0.38413727359389893, 0.8340728640861194, 0.9065886507634249, 0.8397590736960214, nan, 0.5162950350302583, 0.0005883143952128018, nan, 0.0, nan, nan, nan, 0.0, 0.12292785430772822, nan, 0.2729436656969056, 0.0, 0.06840120976094792, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.8486 | 16.0 | 320 | 2.8972 | 0.0534 | 0.0935 | 0.4264 | [0.0456067972434286, 0.3589537334052138, 0.5114273855587838, 0.1469637419683698, 0.4719265131496162, 0.36958768554150634, 0.33637215700289275, 0.0, 0.3528710122520905, 0.0012176535574030917, nan, 0.0, nan, 0.0, nan, 0.0, 0.018390628197258032, nan, 0.11161991217586313, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.07901359410730496, 0.8774563178930108, 0.5244852703918055, 0.41776930409914204, 0.7532234525419417, 0.8728490807741578, 0.38870015116333434, nan, 0.46040930091317267, 0.0012690781953876152, nan, 0.0, nan, nan, nan, 0.0, 0.020984123278076115, nan, 0.1831966851802874, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.1588 | 17.0 | 340 | 2.9381 | 0.0563 | 0.1059 | 0.4354 | [0.06150037580350995, 0.37198682604276545, 0.35689057913539823, 0.11845641191013652, 0.5179268719591287, 0.30885381324679156, 0.5519206537095728, 0.0, 0.33677246622808477, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.09751916095199677, nan, 0.1352296154650572, 0.0, 0.061597889579800266, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.008091179451394773, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.1344587063190191, 0.7753253740520598, 0.36143321044357024, 0.37818875119161105, 0.8201159505670331, 0.9061212477928193, 0.9010113917935452, nan, 0.4938344576274863, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.14111020312864814, nan, 0.19950630344705986, 0.0, 0.06771761196503294, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.009393384125759475, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.3713 | 18.0 | 360 | 2.8494 | 0.0625 | 0.1061 | 0.4458 | [0.038442769266331916, 0.37039107849738784, 0.5801788066627883, 0.11682828067558208, 0.5176490076728686, 0.4042814515049641, 0.5153163519709887, 0.0, 0.33220338983050846, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.04946490650064757, nan, 0.2900430894685632, 0.0, 0.0361204257180347, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.06879790400239018, 0.8358750512400082, 0.5956652180778393, 0.3344518589132507, 0.7865947219730341, 0.8889138939860818, 0.6909177328616621, nan, 0.4793397392794395, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.06353607284613588, nan, 0.41391166358106324, 0.0, 0.041057297924348514, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.7969 | 19.0 | 380 | 2.8787 | 0.0617 | 0.1035 | 0.4410 | [0.041215021976131264, 0.36528584624287747, 0.49263743892345074, 0.09753978004854515, 0.5043575438658242, 0.4502682945796718, 0.5575348895783484, 0.0, 0.31985462703649103, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.04205067708717344, nan, 0.3134015897419123, 0.0, 0.025520509633312618, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.09083805430748193, 0.8409285970485755, 0.5066834181204045, 0.24823641563393709, 0.7674186940163616, 0.9020877332687047, 0.7589178575324533, nan, 0.4576762900462009, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.04993579266869017, nan, 0.42233095301066736, 0.0, 0.02721962132825123, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.7974 | 20.0 | 400 | 2.8235 | 0.0566 | 0.1031 | 0.4409 | [0.046775168218143794, 0.3822492104746668, 0.40051523677163486, 0.1353817380268519, 0.4758927072042502, 0.3573761511862782, 0.5532004989871476, 0.0, 0.3447844703865496, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.043349662630557353, nan, 0.1814648273759622, 0.0, 0.02282693332240482, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.07916297990186502, 0.8233116417298627, 0.40559114095249865, 0.3864251668255481, 0.8347476803191988, 0.9035245646227885, 0.753276504231015, nan, 0.4755005097282173, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.054751342516927384, nan, 0.31486379264744774, 0.0, 0.023076604383311928, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.5353 | 21.0 | 420 | 2.8248 | 0.0598 | 0.1069 | 0.4397 | [0.03263740898597052, 0.3859294236858014, 0.29010016999776495, 0.1265944516422017, 0.5166483150867006, 0.397175016887661, 0.5768542981413755, 0.0, 0.3140218044467336, 0.0004184281530004184, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.08515519568151148, nan, 0.3172241367659977, 0.0, 0.06488679965447278, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.06757983521597738, 0.8269737010657922, 0.29406353194790574, 0.36141086749285034, 0.8129420113272725, 0.9058788906969497, 0.7791924449500538, nan, 0.4959059063401514, 0.00048746049889060716, nan, 0.0, nan, nan, nan, 0.0, 0.11049498015409759, nan, 0.5173454994269594, 0.0, 0.06690972366076978, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.3352 | 22.0 | 440 | 2.8247 | 0.0609 | 0.1081 | 0.4426 | [0.03972907974220915, 0.37305793882380267, 0.3281000999378765, 0.1336317584160083, 0.5295452594531954, 0.40456361773836613, 0.564229870596693, 0.0, 0.3361810183088568, 0.004221714512764478, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.07514875049583498, nan, 0.30264123257520176, 0.0, 0.07347916511921562, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.07232570699700079, 0.8229577654232425, 0.3335804859294654, 0.3631267874165872, 0.7929331744480298, 0.8956133365647613, 0.782776730196824, nan, 0.5635587705789211, 0.005000672359308815, nan, 0.0, nan, nan, nan, 0.0, 0.11058253560588373, nan, 0.47275852948955305, 0.0, 0.08196959025562414, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.1095 | 23.0 | 460 | 2.8141 | 0.0571 | 0.1074 | 0.4353 | [0.06535635985999462, 0.3817156635695309, 0.28877964885196394, 0.14126360181183945, 0.5146051126191038, 0.4122551267771736, 0.559829206785878, 0.0, 0.3282998895270978, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.08083256540726792, nan, 0.3003043489868382, 0.0, 0.010526100223218857, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.19525872470496305, 0.8142037302725968, 0.29280030756767517, 0.4138417540514776, 0.7889298003668643, 0.8714122494200741, 0.761107388302763, nan, 0.5011712903715594, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.11776208265234649, nan, 0.49583443533456756, 0.0, 0.010647553548494013, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.8194 | 24.0 | 480 | 2.8148 | 0.0594 | 0.1084 | 0.4377 | [0.041082739349862965, 0.36736038148174893, 0.27300663217754245, 0.1271069182389937, 0.5250753727372814, 0.43698799236898217, 0.5989327652663605, 0.0, 0.32714173855430556, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.06031417837060945, nan, 0.34822183122982503, 0.0, 0.035373090767584395, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.005033866995073892, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.08802270664077314, 0.7983737702398033, 0.2775180387068427, 0.38531935176358434, 0.8007605071833117, 0.8763286362219991, 0.8850613224454176, nan, 0.5301444591458256, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.06981088022414196, nan, 0.5539760204531429, 0.0, 0.03766002402949828, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.006307262031054104, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 2.1871 | 25.0 | 500 | 2.7767 | 0.0573 | 0.1013 | 0.4311 | [0.04990700856677556, 0.37546962118359956, 0.27628096369189004, 0.09842969904889409, 0.5063786332037878, 0.41967626387127144, 0.5097981720061141, 0.0, 0.3241968203047601, 0.006431231810847897, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.04471002641841741, nan, 0.33753705126797673, 0.0, 0.029831818004448434, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 1.6415778846627377e-05, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.11902600461946841, 0.838387412891986, 0.2794883941260066, 0.26913250714966636, 0.7927296584412281, 0.91067409895094, 0.6367015225420374, nan, 0.48443702145196627, 0.007169031130235998, nan, 0.0, nan, nan, nan, 0.0, 0.05186201260798506, nan, 0.5421184871727056, 0.0, 0.031673364544060986, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 1.9288263091908572e-05, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.6934 | 26.0 | 520 | 2.7945 | 0.0589 | 0.1016 | 0.4320 | [0.04296733447222654, 0.37567309000582, 0.24595370633727465, 0.10864297819297189, 0.5341298936728962, 0.43437502599379474, 0.4777587397726065, 0.0, 0.2911183392208568, 0.0029261367751655006, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0778114989040634, nan, 0.40438270266664444, 0.0, 0.06508605950074091, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0006730833950326445, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.08583937579720304, 0.8444433413609346, 0.24934264274778764, 0.29578646329837943, 0.8142514761605099, 0.9040092788145275, 0.5605432529726192, nan, 0.5076621911806173, 0.003395414509513884, nan, 0.0, nan, nan, nan, 0.0, 0.10775157599813215, nan, 0.5344265185577007, 0.0, 0.0709698802668103, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0007715305236763429, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.3449 | 27.0 | 540 | 2.7991 | 0.0607 | 0.1090 | 0.4378 | [0.04433114699884253, 0.37750723375828926, 0.2321303493345846, 0.11464495079445078, 0.55761093651557, 0.4087793808742583, 0.6078223037433873, 0.0, 0.2881313693987405, 0.007722619490486229, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.09168825232363248, nan, 0.3618692603224931, 0.0, 0.06129256160104886, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0008469193309337286, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.09242384197281178, 0.8053504176060668, 0.23434871858244255, 0.3296091515729266, 0.8010657811935142, 0.9230689332825538, 0.8469198522651125, nan, 0.53292085113767, 0.00942143481476501, nan, 0.0, nan, nan, nan, 0.0, 0.14049731496614523, nan, 0.5584281054394781, 0.0, 0.06585325433981025, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0010029896807792458, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.8113 | 28.0 | 560 | 2.7602 | 0.0574 | 0.1042 | 0.4406 | [0.04459461767502235, 0.3681915211378399, 0.3371584588663842, 0.10612108186563207, 0.5284799908152209, 0.40843786689736145, 0.5541377538633527, 0.0, 0.3261966764208435, 0.00822350839149092, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.013223228273280051, nan, 0.34760924406298027, 0.0, 0.0001395701240180245, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.09001068682991853, 0.8349191048370568, 0.34250544078978984, 0.28995233555767397, 0.7888869548917482, 0.9153827510992626, 0.7647384250962302, nan, 0.5271023577641368, 0.009051637194916964, nan, 0.0, nan, nan, nan, 0.0, 0.014534204996497782, nan, 0.5284316318434277, 0.0, 0.00014500559307287567, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.4011 | 29.0 | 580 | 2.7772 | 0.0591 | 0.1065 | 0.4386 | [0.03563412454066938, 0.37931491532702943, 0.31340535728736746, 0.12195836624455372, 0.5267934993111278, 0.43794202823824624, 0.5505770417729332, 0.0, 0.30867212168122565, 0.008710917535165886, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.06941181588793179, nan, 0.38526624360832123, 0.0, 0.05141927797424308, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0018001210990557547, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.08246095859715248, 0.8202580574912892, 0.3190534055568142, 0.3650333651096282, 0.7976167204466641, 0.8822144514074023, 0.7367654163225233, nan, 0.5449482680085894, 0.00988368183957507, nan, 0.0, nan, nan, nan, 0.0, 0.09312981554984824, nan, 0.5098078109847483, 0.0, 0.05632431536644985, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.002121708940109943, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.7147 | 30.0 | 600 | 2.7422 | 0.0607 | 0.1061 | 0.4457 | [0.03631522190962161, 0.3788654394125197, 0.3835965633977584, 0.13012668131074534, 0.528745751238622, 0.41257300441394396, 0.5906988800716377, 0.0, 0.32064449865194505, 0.006658563248706129, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.036276708480173496, nan, 0.32776635358194856, 0.0, 0.004358712788648799, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.06926904381600267, 0.8484208469973356, 0.3905148325884429, 0.3297998093422307, 0.7864688633898804, 0.9012741058754284, 0.7812728887780704, nan, 0.5146357070038826, 0.00732031197471929, nan, 0.0, nan, nan, nan, 0.0, 0.04296054167639505, nan, 0.5199241823150842, 0.0, 0.004868044910303683, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.3063 | 31.0 | 620 | 2.7685 | 0.0585 | 0.1054 | 0.4347 | [0.04430754040697525, 0.37835434033161264, 0.32605888517053716, 0.08752589812751652, 0.5608869930339113, 0.4092893177481285, 0.5442681632033975, 0.0, 0.30062308263943144, 0.007213476779578524, 0.0, 0.025663831069158934, nan, 0.0, 0.0, 0.0, 0.06638374219369976, nan, 0.3207859642132264, 0.0, 0.031660913542586495, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 1.5779839676828884e-05, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.09900830814842053, 0.8303491366058618, 0.33110895996814477, 0.25612964728312676, 0.7647729859279393, 0.9171138732126164, 0.6990057504402437, nan, 0.5133180053358783, 0.0083456599206616, nan, 0.025663831069158934, nan, nan, nan, 0.0, 0.11230445949101098, nan, 0.5677951159305299, 0.0, 0.040850147077101544, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 1.9288263091908572e-05, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.2689 | 32.0 | 640 | 2.7281 | 0.0609 | 0.1054 | 0.4412 | [0.03443583777574883, 0.3807716584562478, 0.3530706329710267, 0.09404662963124973, 0.5298071063819173, 0.4340780287474333, 0.542041212299833, 0.0, 0.31053222051451657, 0.0066096337069213255, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.07486361728913135, nan, 0.3604811427139456, 0.0, 0.04010514594476187, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.005053296486379787, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.06717764269216184, 0.8403825707112114, 0.36016312071344714, 0.2459485224022879, 0.7962563766117263, 0.9148807256863899, 0.687863298477458, nan, 0.519326291130729, 0.007538828750084045, nan, 0.0, nan, nan, nan, 0.0, 0.1041326173243054, nan, 0.5706823591642423, 0.0, 0.04551104114015826, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0061722441894107435, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.0164 | 33.0 | 660 | 2.7252 | 0.0594 | 0.1018 | 0.4323 | [0.040223350645096016, 0.3724685833505317, 0.17590683415176755, 0.11167905824039653, 0.5424857058628709, 0.47403256285897805, 0.520238721244797, 0.0, 0.3185389426541748, 0.011215288681530662, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.05881674990395697, nan, 0.4364600997296889, 0.0, 0.027335284948484962, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0019017328721084644, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.09594015375245625, 0.846736331727813, 0.1780528494634729, 0.30242135367016204, 0.7982915366797435, 0.8815220025620607, 0.6398338761707366, nan, 0.5368793787822919, 0.012606737040274322, nan, 0.0, nan, nan, nan, 0.0, 0.0893649311230446, nan, 0.5729524817067795, 0.0, 0.029788291834113603, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0022374385186613947, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.8566 | 34.0 | 680 | 2.7209 | 0.0589 | 0.1061 | 0.4359 | [0.050815812580206295, 0.38404224108296486, 0.2190883908335432, 0.10899882148268702, 0.5584041428793315, 0.39645257711623955, 0.5469359828653022, 0.0, 0.2998007210551136, 0.017797729454304006, nan, 0.019423603838815647, nan, 0.0, 0.0, 0.0, 0.07518186248849257, nan, 0.34443384729507764, 0.0, 0.042670203359858536, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.12103696723854614, 0.8300416965566715, 0.22099561304141865, 0.29624404194470927, 0.7894412682260634, 0.9046497939964685, 0.716303822718135, nan, 0.5343470056178556, 0.020725475694210987, nan, 0.019423603838815647, nan, nan, nan, 0.0, 0.1215561522297455, nan, 0.5730847218548885, 0.0, 0.049985499440692714, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.3593 | 35.0 | 700 | 2.6890 | 0.0601 | 0.1078 | 0.4475 | [0.045562535449418365, 0.38311331525885417, 0.3188197583405054, 0.09094479027284892, 0.5295616496014443, 0.40050226399299876, 0.6025016704530268, 0.0, 0.32668354920399373, 0.01488634989723129, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.030872002862936895, nan, 0.34666849653468745, 0.0, 0.03366854532729215, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.003745318352059925, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.09784769543683854, 0.8161604580856733, 0.3229872510452495, 0.23843660629170638, 0.8153279687228032, 0.9110549458158779, 0.8782511804765541, nan, 0.5491345465587922, 0.016556847979560276, nan, 0.0, nan, nan, nan, 0.0, 0.03776558487041793, nan, 0.5567310235387464, 0.0, 0.03786717487674525, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.004359147458771338, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.2632 | 36.0 | 720 | 2.7158 | 0.0567 | 0.0997 | 0.4273 | [0.04562925185895229, 0.37543272943693806, 0.1397522928908468, 0.08608267517578942, 0.5400358152514795, 0.47007743610402397, 0.4560135494416763, 0.0, 0.3162368168439254, 0.014549221864372424, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0653509808555897, nan, 0.3726936218678816, 0.0, 0.043914358457358645, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.02220801558670868, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.11268285395814899, 0.8484736882557902, 0.14091130654473805, 0.22734032411820781, 0.7954369569001298, 0.9090122217221203, 0.5412660318845548, nan, 0.5394985141964731, 0.016287904256034425, nan, 0.0, nan, nan, nan, 0.0, 0.09683632967546113, nan, 0.5769637661994181, 0.0, 0.05323776774247007, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.02814157585109461, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.6449 | 37.0 | 740 | 2.7101 | 0.0608 | 0.1078 | 0.4393 | [0.05030487804878049, 0.38104462388631666, 0.2595306620943983, 0.11875132668223307, 0.537938403002357, 0.4092740009552619, 0.5600659051221507, 0.0, 0.32437881344524133, 0.014042806347625175, nan, 0.030010472105467013, nan, 0.0, 0.0, 0.0, 0.07250148720999405, nan, 0.3668374168156349, 0.0, 0.038216445852534565, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.11603828872826724, 0.8129627613240418, 0.26187190630170465, 0.3413155386081983, 0.8049031290586046, 0.8900218121386283, 0.7522168025058829, nan, 0.5618723293495, 0.01706111746117125, nan, 0.030010472105467013, nan, nan, nan, 0.0, 0.11382208732197058, nan, 0.5382174028034912, 0.0, 0.04397812487053072, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.5116 | 38.0 | 760 | 2.7348 | 0.0593 | 0.1048 | 0.4342 | [0.04476181393352725, 0.38006545810075054, 0.2633851585307896, 0.1154556897613755, 0.5489600924611613, 0.41667613774739815, 0.532446750475821, 0.0, 0.31090536501880345, 0.007462519829300444, nan, 0.0002151802493221822, nan, 0.0, 0.0, 0.0, 0.06115513325126142, nan, 0.3717560897946187, 0.0, 0.01837007348029392, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.012359550561797753, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.10786803488732864, 0.8328470870055339, 0.2663343837318669, 0.32877025738798854, 0.7860404086387189, 0.8885330471211439, 0.6844582274930262, nan, 0.5240656790230571, 0.008421300342903247, nan, 0.0002151802493221822, nan, nan, nan, 0.0, 0.10293602614989493, nan, 0.5660980340297981, 0.0, 0.02050793387744956, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.015700646156813578, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.9855 | 39.0 | 780 | 2.6872 | 0.0603 | 0.1051 | 0.4376 | [0.04404799646463191, 0.38109290900249837, 0.2857481461416043, 0.1120359580489429, 0.5376699530423197, 0.4582241447392038, 0.4894273702252389, 0.0, 0.3283760388917636, 0.010858571263437753, nan, 0.011849259062674833, nan, 0.0, 0.0, 0.0, 0.06959011992124575, nan, 0.375106892423465, 0.0, 0.03300981228668942, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.10537444123967227, 0.8370439639270342, 0.2896834387164542, 0.30795042897998093, 0.797506928916679, 0.8838936398573556, 0.6140971497140364, nan, 0.5574799904561525, 0.01263195051435487, nan, 0.011849259062674833, nan, nan, nan, 0.0, 0.11347186551482606, nan, 0.5800714096799788, 0.0, 0.03846791233376144, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.9454 | 40.0 | 800 | 2.6767 | 0.0580 | 0.1049 | 0.4400 | [0.041050497057223405, 0.37577931110309004, 0.27137538892134616, 0.10145295170450541, 0.534996475983237, 0.40006812246906104, 0.5257193391391942, 0.0, 0.3410182752415085, 0.011918419841136452, nan, 0.0022952559927699435, nan, 0.0, 0.0, 0.0, 0.05122512577143206, nan, 0.3287662252523928, 0.0, 0.029703700970443895, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.07630166737529159, 0.8318559130969461, 0.27425013215798544, 0.2745090562440419, 0.7907105654263794, 0.9149845930131911, 0.6994420982094157, nan, 0.6031765828687938, 0.01370772540845828, nan, 0.0022952559927699435, nan, nan, nan, 0.0, 0.07340065374737334, nan, 0.5526536189720532, 0.0, 0.0333512864067614, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.8725 | 41.0 | 820 | 2.6843 | 0.0589 | 0.1047 | 0.4357 | [0.03591352667869818, 0.37525104931336195, 0.17356675984683084, 0.09792386126315503, 0.5425375053889656, 0.4553681289433037, 0.5076419738263898, 0.0, 0.3379781404683849, 0.01415432783249625, nan, 0.0060824283808403505, nan, 0.0, 0.0, 0.0, 0.08210853848265763, nan, 0.3835485624458254, 0.0, 0.052174206503214485, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0004169138752144701, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.07523298438343887, 0.8257359346177495, 0.17519686390816908, 0.27355576739752147, 0.8020405157524068, 0.9033341411903195, 0.658908507223114, nan, 0.6227740060299763, 0.01641397162643717, nan, 0.0060824283808403505, nan, nan, nan, 0.0, 0.13092458557086153, nan, 0.5753989244467954, 0.0, 0.06405104196876166, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0005014948403896229, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.4625 | 42.0 | 840 | 2.7059 | 0.0571 | 0.1043 | 0.4341 | [0.04433983360377975, 0.3783805047053729, 0.20801116253743535, 0.10351118362732377, 0.5431463363487139, 0.3953751089882103, 0.5479126798497413, 0.0, 0.3159101593606386, 0.013781110748831649, nan, 0.006469752829620278, nan, 0.0, 0.0, 0.0, 0.058092412345768354, nan, 0.3232404021937843, 0.0, 0.03193191609080199, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.0888615653333027, 0.8240241980938717, 0.20981195806644284, 0.30722592945662536, 0.8045550095732859, 0.9027455596717793, 0.7228256634823669, nan, 0.5356538620046418, 0.01598534256706784, nan, 0.006469752829620278, nan, nan, nan, 0.0, 0.09371351856175578, nan, 0.5611610685003967, 0.0, 0.037618593860048885, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.2952 | 43.0 | 860 | 2.6968 | 0.0563 | 0.1019 | 0.4317 | [0.03850150113948829, 0.37611308488925843, 0.22650954289207362, 0.10273593265396544, 0.5374168230677359, 0.44420316494113826, 0.48354693453411846, 0.0, 0.31814893779029924, 0.007812786560539926, nan, 0.0006168500480569223, nan, 0.0, 0.0, 0.0, 0.06330168571982174, nan, 0.3544762447583764, 0.0, 0.026240916257407442, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.001781266748636464, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.0770715787780242, 0.836553981348637, 0.22895255356689254, 0.28293612964728315, 0.8012933977800688, 0.8994564276564069, 0.6091414857641541, nan, 0.543771554996421, 0.008950783298594769, nan, 0.0006168500480569223, nan, nan, nan, 0.0, 0.0953478869950969, nan, 0.575707484792383, 0.0, 0.029995442681360566, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.002179573729385669, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.4466 | 44.0 | 880 | 2.7059 | 0.0575 | 0.1022 | 0.4318 | [0.045210923340409374, 0.371373773150548, 0.2188520195198762, 0.12002228530496378, 0.5301379747435467, 0.4498121786389834, 0.47286280135982284, 0.0, 0.32518703733518206, 0.013297794225629277, 0.0, 0.0023526373925891924, nan, 0.0, 0.0, 0.0, 0.0513332016590954, nan, 0.41675779993567064, 0.0, 0.02982089113998385, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.09373384047895383, 0.8374522827423653, 0.2213732072855093, 0.3450142993326978, 0.8086146183405412, 0.8768479728560052, 0.5906824167432873, nan, 0.5366841637203653, 0.0152205338532912, nan, 0.0023526373925891924, nan, nan, nan, 0.0, 0.075852206397385, nan, 0.5711451996826237, 0.0, 0.03366201267763185, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.8578 | 45.0 | 900 | 2.6998 | 0.0568 | 0.1006 | 0.4268 | [0.04628180193855447, 0.3708106418566931, 0.158281557296168, 0.11389459647022003, 0.5364874248424181, 0.45089564719862685, 0.44476203264977265, 0.0, 0.32263068093686986, 0.016118614210217264, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.06581860290862028, nan, 0.3967814553419323, 0.0, 0.033146426290013485, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.10090435861783667, 0.8434393574502972, 0.15968117315098965, 0.31916110581506196, 0.8006748162330793, 0.8867499913443895, 0.5419673050850099, nan, 0.5301173459427803, 0.018456263026961607, nan, 0.0, nan, nan, nan, 0.0, 0.11280060705113239, nan, 0.574936083928414, 0.0, 0.03869577826573311, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.6658 | 46.0 | 920 | 2.6797 | 0.0585 | 0.1023 | 0.4331 | [0.040923290142332314, 0.36969597321667125, 0.20602701013559413, 0.10237800036220275, 0.5418315391329648, 0.44460820016280367, 0.5166393728531987, 0.0, 0.32463165477048, 0.01625251747423291, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.04664321132440384, nan, 0.4056594933941344, 0.0, 0.02692875998206963, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.08550613056318444, 0.8352937973970076, 0.208212331541477, 0.2802287893231649, 0.7979005717193086, 0.8982446421770591, 0.680920693792953, nan, 0.53764939374878, 0.018447858535601425, nan, 0.0, nan, nan, nan, 0.0, 0.06616273639971983, nan, 0.5725116812130829, 0.0, 0.03359986742345776, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.2425 | 47.0 | 940 | 2.6742 | 0.0585 | 0.1044 | 0.4358 | [0.040611866981581896, 0.37639481430157035, 0.22798621355395274, 0.09035537259514408, 0.5401612620951082, 0.4287328110269497, 0.5270372350316103, 0.0, 0.33090850927703136, 0.013922062744238706, nan, 0.0020800757434477615, nan, 0.0, 0.0, 0.0, 0.0662686567164179, nan, 0.35688549201583186, 0.0, 0.041424926462223524, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.08417314962711007, 0.8237487830498053, 0.2302432393466933, 0.24606291706387035, 0.8017084633202566, 0.9132188484575702, 0.7067353394941482, nan, 0.5609287898835217, 0.016018960532508574, nan, 0.0020800757434477615, nan, nan, nan, 0.0, 0.10366565491477936, nan, 0.5763246054835581, 0.0, 0.05280275096325144, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 0.7362 | 48.0 | 960 | 2.6857 | 0.0588 | 0.1042 | 0.4353 | [0.03537314543740386, 0.3723326335348705, 0.2224170506599247, 0.09766928699489726, 0.5443247601005275, 0.433105077521621, 0.5251633025363385, 0.0, 0.3252722045029616, 0.013493540013677375, nan, 0.005121289933867936, nan, 0.0, 0.0, 0.0, 0.06534738573083679, nan, 0.38016253567237906, 0.0, 0.04021769512816718, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.0655343989520012, 0.8278175599508096, 0.22467544058382935, 0.27004766444232603, 0.8009613453479186, 0.9076792576948378, 0.6953591297978775, nan, 0.5613137973667657, 0.015422241645935588, nan, 0.005121289933867936, nan, nan, nan, 0.0, 0.10206047163203362, nan, 0.5783963678039319, 0.0, 0.050669097236607695, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.4685 | 49.0 | 980 | 2.6771 | 0.0611 | 0.1080 | 0.4392 | [0.04595285196175404, 0.37860249790145756, 0.265269688700634, 0.10659299393809345, 0.5405468092604446, 0.44103987022965935, 0.5372866127583109, 0.0, 0.32756701287393497, 0.01467050945890307, nan, 0.008277266923926609, nan, 0.0, 0.0, 0.0, 0.07906354515050167, nan, 0.3837320998418727, 0.0, 0.04668650828593065, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.10250163749813268, 0.810322299651568, 0.2682910084512457, 0.303069590085796, 0.8050584439059006, 0.8942803725374788, 0.7455313313282115, nan, 0.5712155391189293, 0.017069521952531433, nan, 0.008277266923926609, nan, nan, nan, 0.0, 0.13798739201494278, nan, 0.5722912809662347, 0.0, 0.05491568960517049, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
| 1.4566 | 50.0 | 1000 | 2.6864 | 0.0593 | 0.1052 | 0.4359 | [0.04561118994526657, 0.371474721092176, 0.2249066594257009, 0.12191163451994527, 0.5429954768427088, 0.4549738032652039, 0.5223870813967346, 0.0, 0.3278225074735203, 0.01314822376480196, 0.0, 0.002639544391685435, nan, 0.0, 0.0, 0.0, 0.06067888781685817, nan, 0.4152131860401912, 0.0, 0.037789830270479, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] | [0.09566436459326845, 0.8307894804263168, 0.22745590729031506, 0.3465776930409914, 0.7988512056984481, 0.8809161098223869, 0.6830946407143637, nan, 0.5583801487972583, 0.014884354198883884, nan, 0.002639544391685435, nan, nan, nan, 0.0, 0.10095143590940929, nan, 0.5724235211143437, 0.0, 0.044599577412271614, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0] |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
gokuls/sa_bert_12_layer_modified_complete_training_48_v2 | gokuls | "2023-07-12T18:58:21Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-07-10T18:19:08Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sa_bert_12_layer_modified_complete_training_48_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_bert_12_layer_modified_complete_training_48_v2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9821
- Accuracy: 0.3685
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 6.5933 | 0.05 | 10000 | 6.5711 | 0.1226 |
| 6.1523 | 0.11 | 20000 | 6.3425 | 0.1396 |
| 6.1308 | 0.16 | 30000 | 6.2468 | 0.1444 |
| 6.2297 | 0.22 | 40000 | 6.1895 | 0.1468 |
| 6.1484 | 0.27 | 50000 | 6.1483 | 0.1487 |
| 6.0591 | 0.33 | 60000 | 6.1205 | 0.1492 |
| 6.0199 | 0.38 | 70000 | 6.0862 | 0.1501 |
| 5.8666 | 0.44 | 80000 | 5.8875 | 0.1600 |
| 5.9153 | 0.49 | 90000 | 5.7648 | 0.1722 |
| 5.5197 | 0.55 | 100000 | 5.6349 | 0.1891 |
| 5.4384 | 0.6 | 110000 | 5.5023 | 0.2051 |
| 5.3973 | 0.66 | 120000 | 5.3651 | 0.2209 |
| 5.2627 | 0.71 | 130000 | 5.2054 | 0.2395 |
| 5.3179 | 0.76 | 140000 | 5.0131 | 0.2621 |
| 4.8813 | 0.82 | 150000 | 4.7153 | 0.2949 |
| 4.6653 | 0.87 | 160000 | 4.4651 | 0.3209 |
| 4.7227 | 0.93 | 170000 | 4.1752 | 0.3502 |
| 4.2892 | 0.98 | 180000 | 3.9821 | 0.3685 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3
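A minimal masked-LM sketch, assuming the custom `hybridbert` architecture loads through the standard `fill-mask` pipeline (it may require the model's custom modeling code to be available):
```python
from transformers import pipeline

# Hypothetical usage; the hybridbert architecture may need its custom code registered with transformers
fill = pipeline("fill-mask", model="gokuls/sa_bert_12_layer_modified_complete_training_48_v2")
print(fill("Paris is the [MASK] of France."))
```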
|
niklaskahr/rl-test | niklaskahr | "2024-03-09T10:38:48Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-03-09T10:38:13Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 55.05 +/- 116.22
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it to the actual file):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from this repo and rebuild the agent
# (the filename "ppo-LunarLander-v2.zip" is an assumption)
checkpoint = load_from_hub(repo_id="niklaskahr/rl-test", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
StepLaw/StepLaw-N_429M-D_22.0B-LR7.81E-03-BS4194304 | StepLaw | "2025-04-15T15:47:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"step1",
"text-generation",
"StepLaw",
"causal-lm",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-11T05:40:53Z" |  |
MayBashendy/Arabic_FineTuningAraBERT_AugV0_k25_task2_organization_fold1 | MayBashendy | "2024-11-19T18:20:06Z" | 163 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-19T18:04:27Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV0_k25_task2_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV0_k25_task2_organization_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4983
- Qwk: 0.2222
- Mse: 0.4983
- Rmse: 0.7059
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
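The hyperparameters above map onto `TrainingArguments` roughly as follows (a sketch; the actual training script is not published and `output_dir` is an assumption):
```python
from transformers import TrainingArguments

# Mirrors the reported hyperparameters; Adam betas/epsilon are the library defaults
args = TrainingArguments(
    output_dir="arabert-task2-fold1",  # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```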
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0171 | 2 | 6.2082 | 0.0238 | 6.2082 | 2.4916 |
| No log | 0.0342 | 4 | 3.3710 | 0.0 | 3.3710 | 1.8360 |
| No log | 0.0513 | 6 | 1.6135 | 0.1379 | 1.6135 | 1.2702 |
| No log | 0.0684 | 8 | 0.7627 | 0.0625 | 0.7627 | 0.8733 |
| No log | 0.0855 | 10 | 0.5832 | 0.0870 | 0.5832 | 0.7637 |
| No log | 0.1026 | 12 | 0.5320 | 0.125 | 0.5320 | 0.7294 |
| No log | 0.1197 | 14 | 0.5748 | 0.3158 | 0.5748 | 0.7581 |
| No log | 0.1368 | 16 | 0.6068 | 0.3390 | 0.6068 | 0.7790 |
| No log | 0.1538 | 18 | 0.6942 | 0.1231 | 0.6942 | 0.8332 |
| No log | 0.1709 | 20 | 0.5481 | 0.1818 | 0.5481 | 0.7403 |
| No log | 0.1880 | 22 | 0.5647 | 0.0 | 0.5647 | 0.7515 |
| No log | 0.2051 | 24 | 0.5723 | 0.0 | 0.5723 | 0.7565 |
| No log | 0.2222 | 26 | 0.5637 | 0.1702 | 0.5637 | 0.7508 |
| No log | 0.2393 | 28 | 0.5962 | 0.0870 | 0.5962 | 0.7721 |
| No log | 0.2564 | 30 | 0.6090 | 0.0870 | 0.6090 | 0.7804 |
| No log | 0.2735 | 32 | 0.6314 | 0.0 | 0.6314 | 0.7946 |
| No log | 0.2906 | 34 | 0.6095 | 0.0870 | 0.6095 | 0.7807 |
| No log | 0.3077 | 36 | 0.6255 | 0.0870 | 0.6255 | 0.7909 |
| No log | 0.3248 | 38 | 0.5998 | 0.125 | 0.5998 | 0.7745 |
| No log | 0.3419 | 40 | 0.6401 | 0.0870 | 0.6401 | 0.8001 |
| No log | 0.3590 | 42 | 0.6863 | 0.0 | 0.6863 | 0.8284 |
| No log | 0.3761 | 44 | 0.5911 | 0.0 | 0.5911 | 0.7688 |
| No log | 0.3932 | 46 | 0.5253 | 0.2800 | 0.5253 | 0.7248 |
| No log | 0.4103 | 48 | 0.5570 | 0.2105 | 0.5570 | 0.7463 |
| No log | 0.4274 | 50 | 0.5155 | 0.1818 | 0.5155 | 0.7180 |
| No log | 0.4444 | 52 | 0.5137 | 0.1702 | 0.5137 | 0.7167 |
| No log | 0.4615 | 54 | 0.5458 | 0.0870 | 0.5458 | 0.7388 |
| No log | 0.4786 | 56 | 0.4609 | 0.2642 | 0.4609 | 0.6789 |
| No log | 0.4957 | 58 | 0.5251 | 0.3607 | 0.5251 | 0.7246 |
| No log | 0.5128 | 60 | 0.5355 | 0.3607 | 0.5355 | 0.7318 |
| No log | 0.5299 | 62 | 0.4626 | 0.2373 | 0.4626 | 0.6802 |
| No log | 0.5470 | 64 | 0.5866 | 0.1702 | 0.5866 | 0.7659 |
| No log | 0.5641 | 66 | 0.8405 | 0.0400 | 0.8405 | 0.9168 |
| No log | 0.5812 | 68 | 0.9076 | 0.0727 | 0.9076 | 0.9527 |
| No log | 0.5983 | 70 | 0.7508 | 0.0 | 0.7508 | 0.8665 |
| No log | 0.6154 | 72 | 0.5574 | 0.1702 | 0.5574 | 0.7466 |
| No log | 0.6325 | 74 | 0.4989 | 0.3158 | 0.4989 | 0.7063 |
| No log | 0.6496 | 76 | 0.5893 | 0.3607 | 0.5893 | 0.7677 |
| No log | 0.6667 | 78 | 0.6050 | 0.2258 | 0.6050 | 0.7778 |
| No log | 0.6838 | 80 | 0.6042 | 0.3000 | 0.6042 | 0.7773 |
| No log | 0.7009 | 82 | 0.5657 | 0.1818 | 0.5657 | 0.7522 |
| No log | 0.7179 | 84 | 0.5329 | 0.1702 | 0.5329 | 0.7300 |
| No log | 0.7350 | 86 | 0.5101 | 0.25 | 0.5101 | 0.7142 |
| No log | 0.7521 | 88 | 0.4962 | 0.2642 | 0.4962 | 0.7044 |
| No log | 0.7692 | 90 | 0.4955 | 0.2642 | 0.4955 | 0.7039 |
| No log | 0.7863 | 92 | 0.5549 | 0.1702 | 0.5549 | 0.7449 |
| No log | 0.8034 | 94 | 0.6394 | 0.1176 | 0.6394 | 0.7996 |
| No log | 0.8205 | 96 | 0.5761 | 0.0870 | 0.5761 | 0.7590 |
| No log | 0.8376 | 98 | 0.5342 | 0.3774 | 0.5342 | 0.7309 |
| No log | 0.8547 | 100 | 0.5682 | 0.3158 | 0.5682 | 0.7538 |
| No log | 0.8718 | 102 | 0.5499 | 0.25 | 0.5499 | 0.7416 |
| No log | 0.8889 | 104 | 0.5971 | 0.0 | 0.5971 | 0.7727 |
| No log | 0.9060 | 106 | 0.6666 | 0.1818 | 0.6666 | 0.8165 |
| No log | 0.9231 | 108 | 0.6255 | 0.1818 | 0.6255 | 0.7909 |
| No log | 0.9402 | 110 | 0.5382 | 0.1600 | 0.5382 | 0.7336 |
| No log | 0.9573 | 112 | 0.4737 | 0.1818 | 0.4737 | 0.6882 |
| No log | 0.9744 | 114 | 0.4788 | 0.1053 | 0.4788 | 0.6919 |
| No log | 0.9915 | 116 | 0.4660 | 0.2222 | 0.4660 | 0.6827 |
| No log | 1.0085 | 118 | 0.5381 | 0.2500 | 0.5381 | 0.7335 |
| No log | 1.0256 | 120 | 0.7555 | 0.2154 | 0.7555 | 0.8692 |
| No log | 1.0427 | 122 | 0.7285 | 0.2154 | 0.7285 | 0.8535 |
| No log | 1.0598 | 124 | 0.5238 | 0.2623 | 0.5238 | 0.7238 |
| No log | 1.0769 | 126 | 0.4489 | 0.1429 | 0.4489 | 0.6700 |
| No log | 1.0940 | 128 | 0.4558 | 0.1724 | 0.4558 | 0.6751 |
| No log | 1.1111 | 130 | 0.4388 | 0.2642 | 0.4388 | 0.6624 |
| No log | 1.1282 | 132 | 0.4816 | 0.3077 | 0.4816 | 0.6940 |
| No log | 1.1453 | 134 | 0.5247 | 0.3226 | 0.5247 | 0.7243 |
| No log | 1.1624 | 136 | 0.4772 | 0.4923 | 0.4772 | 0.6908 |
| No log | 1.1795 | 138 | 0.4512 | 0.3636 | 0.4512 | 0.6717 |
| No log | 1.1966 | 140 | 0.4650 | 0.4348 | 0.4650 | 0.6819 |
| No log | 1.2137 | 142 | 0.4836 | 0.4706 | 0.4836 | 0.6954 |
| No log | 1.2308 | 144 | 0.5127 | 0.4507 | 0.5127 | 0.7161 |
| No log | 1.2479 | 146 | 0.4856 | 0.4706 | 0.4856 | 0.6969 |
| No log | 1.2650 | 148 | 0.4554 | 0.2759 | 0.4554 | 0.6749 |
| No log | 1.2821 | 150 | 0.4658 | 0.2373 | 0.4658 | 0.6825 |
| No log | 1.2991 | 152 | 0.4572 | 0.2759 | 0.4572 | 0.6762 |
| No log | 1.3162 | 154 | 0.4529 | 0.4 | 0.4529 | 0.6730 |
| No log | 1.3333 | 156 | 0.4726 | 0.2800 | 0.4726 | 0.6875 |
| No log | 1.3504 | 158 | 0.4596 | 0.2353 | 0.4596 | 0.6779 |
| No log | 1.3675 | 160 | 0.4489 | 0.2500 | 0.4489 | 0.6700 |
| No log | 1.3846 | 162 | 0.4622 | 0.2759 | 0.4622 | 0.6799 |
| No log | 1.4017 | 164 | 0.4886 | 0.2759 | 0.4886 | 0.6990 |
| No log | 1.4188 | 166 | 0.4777 | 0.2500 | 0.4777 | 0.6911 |
| No log | 1.4359 | 168 | 0.4892 | 0.2759 | 0.4892 | 0.6994 |
| No log | 1.4530 | 170 | 0.4788 | 0.2759 | 0.4788 | 0.6919 |
| No log | 1.4701 | 172 | 0.4583 | 0.2105 | 0.4583 | 0.6770 |
| No log | 1.4872 | 174 | 0.4468 | 0.3607 | 0.4468 | 0.6684 |
| No log | 1.5043 | 176 | 0.4447 | 0.2500 | 0.4447 | 0.6668 |
| No log | 1.5214 | 178 | 0.4353 | 0.2105 | 0.4353 | 0.6597 |
| No log | 1.5385 | 180 | 0.4478 | 0.3774 | 0.4478 | 0.6691 |
| No log | 1.5556 | 182 | 0.4576 | 0.2909 | 0.4576 | 0.6765 |
| No log | 1.5726 | 184 | 0.4775 | 0.2105 | 0.4775 | 0.6910 |
| No log | 1.5897 | 186 | 0.4998 | 0.2105 | 0.4998 | 0.7069 |
| No log | 1.6068 | 188 | 0.4887 | 0.2105 | 0.4887 | 0.6991 |
| No log | 1.6239 | 190 | 0.4656 | 0.2500 | 0.4656 | 0.6823 |
| No log | 1.6410 | 192 | 0.4648 | 0.3774 | 0.4648 | 0.6818 |
| No log | 1.6581 | 194 | 0.4661 | 0.1818 | 0.4661 | 0.6827 |
| No log | 1.6752 | 196 | 0.5110 | 0.2105 | 0.5110 | 0.7149 |
| No log | 1.6923 | 198 | 0.5478 | 0.4407 | 0.5478 | 0.7402 |
| No log | 1.7094 | 200 | 0.4917 | 0.2105 | 0.4917 | 0.7012 |
| No log | 1.7265 | 202 | 0.5091 | 0.4762 | 0.5091 | 0.7135 |
| No log | 1.7436 | 204 | 0.5961 | 0.3284 | 0.5961 | 0.7721 |
| No log | 1.7607 | 206 | 0.5813 | 0.1176 | 0.5813 | 0.7625 |
| No log | 1.7778 | 208 | 0.5009 | 0.1702 | 0.5009 | 0.7077 |
| No log | 1.7949 | 210 | 0.4805 | 0.3774 | 0.4805 | 0.6932 |
| No log | 1.8120 | 212 | 0.5108 | 0.3793 | 0.5108 | 0.7147 |
| No log | 1.8291 | 214 | 0.5162 | 0.4407 | 0.5162 | 0.7185 |
| No log | 1.8462 | 216 | 0.4735 | 0.2105 | 0.4735 | 0.6881 |
| No log | 1.8632 | 218 | 0.4626 | 0.4231 | 0.4626 | 0.6801 |
| No log | 1.8803 | 220 | 0.5212 | 0.4590 | 0.5212 | 0.7219 |
| No log | 1.8974 | 222 | 0.5387 | 0.4545 | 0.5387 | 0.7340 |
| No log | 1.9145 | 224 | 0.5198 | 0.4658 | 0.5198 | 0.7210 |
| No log | 1.9316 | 226 | 0.5068 | 0.3684 | 0.5068 | 0.7119 |
| No log | 1.9487 | 228 | 0.4951 | 0.3684 | 0.4951 | 0.7036 |
| No log | 1.9658 | 230 | 0.4863 | 0.2286 | 0.4863 | 0.6974 |
| No log | 1.9829 | 232 | 0.4747 | 0.2817 | 0.4747 | 0.6890 |
| No log | 2.0 | 234 | 0.4621 | 0.2258 | 0.4621 | 0.6798 |
| No log | 2.0171 | 236 | 0.4554 | 0.4000 | 0.4554 | 0.6749 |
| No log | 2.0342 | 238 | 0.4989 | 0.4 | 0.4989 | 0.7063 |
| No log | 2.0513 | 240 | 0.5127 | 0.4 | 0.5127 | 0.7160 |
| No log | 2.0684 | 242 | 0.4630 | 0.4 | 0.4630 | 0.6804 |
| No log | 2.0855 | 244 | 0.4268 | 0.4 | 0.4268 | 0.6533 |
| No log | 2.1026 | 246 | 0.4140 | 0.2222 | 0.4140 | 0.6435 |
| No log | 2.1197 | 248 | 0.4101 | 0.2759 | 0.4101 | 0.6404 |
| No log | 2.1368 | 250 | 0.4110 | 0.3158 | 0.4110 | 0.6411 |
| No log | 2.1538 | 252 | 0.4073 | 0.2759 | 0.4073 | 0.6382 |
| No log | 2.1709 | 254 | 0.4152 | 0.3158 | 0.4152 | 0.6444 |
| No log | 2.1880 | 256 | 0.4253 | 0.3077 | 0.4253 | 0.6522 |
| No log | 2.2051 | 258 | 0.4376 | 0.4000 | 0.4376 | 0.6615 |
| No log | 2.2222 | 260 | 0.4475 | 0.4 | 0.4475 | 0.6690 |
| No log | 2.2393 | 262 | 0.4508 | 0.4 | 0.4508 | 0.6714 |
| No log | 2.2564 | 264 | 0.4479 | 0.4590 | 0.4479 | 0.6693 |
| No log | 2.2735 | 266 | 0.4616 | 0.5 | 0.4616 | 0.6794 |
| No log | 2.2906 | 268 | 0.4515 | 0.5161 | 0.4515 | 0.6719 |
| No log | 2.3077 | 270 | 0.4710 | 0.4857 | 0.4710 | 0.6863 |
| No log | 2.3248 | 272 | 0.4533 | 0.4923 | 0.4533 | 0.6733 |
| No log | 2.3419 | 274 | 0.4358 | 0.3793 | 0.4358 | 0.6601 |
| No log | 2.3590 | 276 | 0.4338 | 0.3774 | 0.4338 | 0.6586 |
| No log | 2.3761 | 278 | 0.4588 | 0.3158 | 0.4588 | 0.6773 |
| No log | 2.3932 | 280 | 0.4634 | 0.4407 | 0.4634 | 0.6808 |
| No log | 2.4103 | 282 | 0.4350 | 0.5714 | 0.4350 | 0.6596 |
| No log | 2.4274 | 284 | 0.4129 | 0.4906 | 0.4129 | 0.6426 |
| No log | 2.4444 | 286 | 0.4056 | 0.4000 | 0.4056 | 0.6369 |
| No log | 2.4615 | 288 | 0.4306 | 0.1702 | 0.4306 | 0.6562 |
| No log | 2.4786 | 290 | 0.4753 | 0.1923 | 0.4753 | 0.6894 |
| No log | 2.4957 | 292 | 0.4552 | 0.4375 | 0.4552 | 0.6747 |
| No log | 2.5128 | 294 | 0.4009 | 0.4179 | 0.4009 | 0.6332 |
| No log | 2.5299 | 296 | 0.4095 | 0.4000 | 0.4095 | 0.6399 |
| No log | 2.5470 | 298 | 0.4066 | 0.3478 | 0.4066 | 0.6377 |
| No log | 2.5641 | 300 | 0.4008 | 0.3824 | 0.4008 | 0.6331 |
| No log | 2.5812 | 302 | 0.4572 | 0.4375 | 0.4572 | 0.6762 |
| No log | 2.5983 | 304 | 0.4688 | 0.3226 | 0.4688 | 0.6847 |
| No log | 2.6154 | 306 | 0.4060 | 0.5714 | 0.4060 | 0.6371 |
| No log | 2.6325 | 308 | 0.3946 | 0.2642 | 0.3946 | 0.6282 |
| No log | 2.6496 | 310 | 0.4292 | 0.4407 | 0.4292 | 0.6552 |
| No log | 2.6667 | 312 | 0.4310 | 0.4407 | 0.4310 | 0.6565 |
| No log | 2.6838 | 314 | 0.4162 | 0.3077 | 0.4162 | 0.6451 |
| No log | 2.7009 | 316 | 0.4766 | 0.5926 | 0.4766 | 0.6904 |
| No log | 2.7179 | 318 | 0.5341 | 0.5063 | 0.5341 | 0.7308 |
| No log | 2.7350 | 320 | 0.4937 | 0.5500 | 0.4937 | 0.7027 |
| No log | 2.7521 | 322 | 0.4400 | 0.5352 | 0.4400 | 0.6633 |
| No log | 2.7692 | 324 | 0.4138 | 0.3077 | 0.4138 | 0.6433 |
| No log | 2.7863 | 326 | 0.4476 | 0.4407 | 0.4476 | 0.6690 |
| No log | 2.8034 | 328 | 0.4736 | 0.4407 | 0.4736 | 0.6882 |
| No log | 2.8205 | 330 | 0.4640 | 0.4407 | 0.4640 | 0.6811 |
| No log | 2.8376 | 332 | 0.4359 | 0.5217 | 0.4359 | 0.6603 |
| No log | 2.8547 | 334 | 0.4385 | 0.5217 | 0.4385 | 0.6622 |
| No log | 2.8718 | 336 | 0.4348 | 0.4375 | 0.4348 | 0.6594 |
| No log | 2.8889 | 338 | 0.4299 | 0.4375 | 0.4299 | 0.6557 |
| No log | 2.9060 | 340 | 0.4126 | 0.3824 | 0.4126 | 0.6423 |
| No log | 2.9231 | 342 | 0.4083 | 0.5714 | 0.4083 | 0.6390 |
| No log | 2.9402 | 344 | 0.4133 | 0.5574 | 0.4133 | 0.6429 |
| No log | 2.9573 | 346 | 0.4097 | 0.3636 | 0.4097 | 0.6401 |
| No log | 2.9744 | 348 | 0.4188 | 0.4706 | 0.4188 | 0.6472 |
| No log | 2.9915 | 350 | 0.4286 | 0.4706 | 0.4286 | 0.6547 |
| No log | 3.0085 | 352 | 0.4288 | 0.3810 | 0.4288 | 0.6548 |
| No log | 3.0256 | 354 | 0.4325 | 0.3793 | 0.4325 | 0.6576 |
| No log | 3.0427 | 356 | 0.4150 | 0.3810 | 0.4150 | 0.6442 |
| No log | 3.0598 | 358 | 0.4088 | 0.4211 | 0.4088 | 0.6394 |
| No log | 3.0769 | 360 | 0.4074 | 0.4 | 0.4074 | 0.6383 |
| No log | 3.0940 | 362 | 0.4214 | 0.3793 | 0.4214 | 0.6492 |
| No log | 3.1111 | 364 | 0.4598 | 0.3793 | 0.4598 | 0.6781 |
| No log | 3.1282 | 366 | 0.4591 | 0.3793 | 0.4591 | 0.6775 |
| No log | 3.1453 | 368 | 0.4226 | 0.3793 | 0.4226 | 0.6501 |
| No log | 3.1624 | 370 | 0.4195 | 0.3793 | 0.4195 | 0.6477 |
| No log | 3.1795 | 372 | 0.4167 | 0.3810 | 0.4167 | 0.6455 |
| No log | 3.1966 | 374 | 0.4154 | 0.2258 | 0.4154 | 0.6445 |
| No log | 3.2137 | 376 | 0.4194 | 0.2623 | 0.4194 | 0.6476 |
| No log | 3.2308 | 378 | 0.4252 | 0.2623 | 0.4252 | 0.6521 |
| No log | 3.2479 | 380 | 0.4301 | 0.2623 | 0.4301 | 0.6558 |
| No log | 3.2650 | 382 | 0.4378 | 0.2759 | 0.4378 | 0.6616 |
| No log | 3.2821 | 384 | 0.4351 | 0.3158 | 0.4351 | 0.6596 |
| No log | 3.2991 | 386 | 0.4532 | 0.3158 | 0.4532 | 0.6732 |
| No log | 3.3162 | 388 | 0.5103 | 0.2373 | 0.5103 | 0.7143 |
| No log | 3.3333 | 390 | 0.5526 | 0.4 | 0.5526 | 0.7433 |
| No log | 3.3504 | 392 | 0.5081 | 0.3478 | 0.5081 | 0.7128 |
| No log | 3.3675 | 394 | 0.4626 | 0.5217 | 0.4626 | 0.6801 |
| No log | 3.3846 | 396 | 0.5050 | 0.4507 | 0.5050 | 0.7106 |
| No log | 3.4017 | 398 | 0.5186 | 0.4348 | 0.5186 | 0.7202 |
| No log | 3.4188 | 400 | 0.4713 | 0.4375 | 0.4713 | 0.6865 |
| No log | 3.4359 | 402 | 0.4113 | 0.4545 | 0.4113 | 0.6413 |
| No log | 3.4530 | 404 | 0.4253 | 0.4211 | 0.4253 | 0.6522 |
| No log | 3.4701 | 406 | 0.4467 | 0.4407 | 0.4467 | 0.6684 |
| No log | 3.4872 | 408 | 0.4311 | 0.4407 | 0.4311 | 0.6565 |
| No log | 3.5043 | 410 | 0.4032 | 0.4444 | 0.4032 | 0.6350 |
| No log | 3.5214 | 412 | 0.4205 | 0.4545 | 0.4205 | 0.6484 |
| No log | 3.5385 | 414 | 0.4822 | 0.3810 | 0.4822 | 0.6944 |
| No log | 3.5556 | 416 | 0.5142 | 0.3824 | 0.5142 | 0.7171 |
| No log | 3.5726 | 418 | 0.4722 | 0.3824 | 0.4722 | 0.6872 |
| No log | 3.5897 | 420 | 0.4172 | 0.5075 | 0.4172 | 0.6459 |
| No log | 3.6068 | 422 | 0.4064 | 0.4706 | 0.4064 | 0.6375 |
| No log | 3.6239 | 424 | 0.4035 | 0.4706 | 0.4035 | 0.6352 |
| No log | 3.6410 | 426 | 0.4071 | 0.5075 | 0.4071 | 0.6380 |
| No log | 3.6581 | 428 | 0.4280 | 0.5455 | 0.4280 | 0.6542 |
| No log | 3.6752 | 430 | 0.4372 | 0.5455 | 0.4372 | 0.6612 |
| No log | 3.6923 | 432 | 0.4281 | 0.5075 | 0.4281 | 0.6543 |
| No log | 3.7094 | 434 | 0.4293 | 0.1818 | 0.4293 | 0.6552 |
| No log | 3.7265 | 436 | 0.4603 | 0.4407 | 0.4603 | 0.6785 |
| No log | 3.7436 | 438 | 0.4748 | 0.4 | 0.4748 | 0.6891 |
| No log | 3.7607 | 440 | 0.4452 | 0.4407 | 0.4452 | 0.6672 |
| No log | 3.7778 | 442 | 0.4288 | 0.2105 | 0.4288 | 0.6548 |
| No log | 3.7949 | 444 | 0.4257 | 0.1429 | 0.4257 | 0.6524 |
| No log | 3.8120 | 446 | 0.4275 | 0.3774 | 0.4275 | 0.6539 |
| No log | 3.8291 | 448 | 0.4248 | 0.3774 | 0.4248 | 0.6517 |
| No log | 3.8462 | 450 | 0.4246 | 0.3774 | 0.4246 | 0.6516 |
| No log | 3.8632 | 452 | 0.4255 | 0.3774 | 0.4255 | 0.6523 |
| No log | 3.8803 | 454 | 0.4376 | 0.5161 | 0.4376 | 0.6615 |
| No log | 3.8974 | 456 | 0.4366 | 0.5161 | 0.4366 | 0.6608 |
| No log | 3.9145 | 458 | 0.4180 | 0.4231 | 0.4180 | 0.6465 |
| No log | 3.9316 | 460 | 0.4039 | 0.4 | 0.4039 | 0.6356 |
| No log | 3.9487 | 462 | 0.4252 | 0.3158 | 0.4252 | 0.6521 |
| No log | 3.9658 | 464 | 0.4171 | 0.3571 | 0.4171 | 0.6458 |
| No log | 3.9829 | 466 | 0.3864 | 0.5556 | 0.3864 | 0.6216 |
| No log | 4.0 | 468 | 0.3961 | 0.4000 | 0.3961 | 0.6293 |
| No log | 4.0171 | 470 | 0.3968 | 0.4000 | 0.3968 | 0.6299 |
| No log | 4.0342 | 472 | 0.4008 | 0.4000 | 0.4008 | 0.6331 |
| No log | 4.0513 | 474 | 0.3930 | 0.6038 | 0.3930 | 0.6269 |
| No log | 4.0684 | 476 | 0.4125 | 0.3571 | 0.4125 | 0.6423 |
| No log | 4.0855 | 478 | 0.4197 | 0.3571 | 0.4197 | 0.6478 |
| No log | 4.1026 | 480 | 0.4229 | 0.3571 | 0.4229 | 0.6503 |
| No log | 4.1197 | 482 | 0.4115 | 0.3571 | 0.4115 | 0.6415 |
| No log | 4.1368 | 484 | 0.4259 | 0.3571 | 0.4259 | 0.6526 |
| No log | 4.1538 | 486 | 0.4384 | 0.3571 | 0.4384 | 0.6621 |
| No log | 4.1709 | 488 | 0.4979 | 0.4211 | 0.4979 | 0.7056 |
| No log | 4.1880 | 490 | 0.5966 | 0.3636 | 0.5966 | 0.7724 |
| No log | 4.2051 | 492 | 0.6560 | 0.4324 | 0.6560 | 0.8099 |
| No log | 4.2222 | 494 | 0.6152 | 0.3636 | 0.6152 | 0.7844 |
| No log | 4.2393 | 496 | 0.5512 | 0.4000 | 0.5512 | 0.7424 |
| No log | 4.2564 | 498 | 0.5163 | 0.4194 | 0.5163 | 0.7185 |
| 0.3103 | 4.2735 | 500 | 0.4991 | 0.4507 | 0.4991 | 0.7065 |
| 0.3103 | 4.2906 | 502 | 0.4867 | 0.3143 | 0.4867 | 0.6976 |
| 0.3103 | 4.3077 | 504 | 0.4966 | 0.4211 | 0.4966 | 0.7047 |
| 0.3103 | 4.3248 | 506 | 0.5125 | 0.4211 | 0.5125 | 0.7159 |
| 0.3103 | 4.3419 | 508 | 0.5457 | 0.3793 | 0.5457 | 0.7387 |
| 0.3103 | 4.3590 | 510 | 0.5912 | 0.3793 | 0.5912 | 0.7689 |
| 0.3103 | 4.3761 | 512 | 0.6052 | 0.4 | 0.6052 | 0.7779 |
| 0.3103 | 4.3932 | 514 | 0.5669 | 0.3793 | 0.5669 | 0.7529 |
| 0.3103 | 4.4103 | 516 | 0.4985 | 0.4211 | 0.4985 | 0.7060 |
| 0.3103 | 4.4274 | 518 | 0.4429 | 0.3774 | 0.4429 | 0.6655 |
| 0.3103 | 4.4444 | 520 | 0.4660 | 0.4643 | 0.4660 | 0.6826 |
| 0.3103 | 4.4615 | 522 | 0.5136 | 0.4923 | 0.5136 | 0.7167 |
| 0.3103 | 4.4786 | 524 | 0.4910 | 0.4857 | 0.4910 | 0.7007 |
| 0.3103 | 4.4957 | 526 | 0.4404 | 0.5075 | 0.4404 | 0.6636 |
| 0.3103 | 4.5128 | 528 | 0.4226 | 0.5312 | 0.4226 | 0.6501 |
| 0.3103 | 4.5299 | 530 | 0.4947 | 0.4 | 0.4947 | 0.7033 |
| 0.3103 | 4.5470 | 532 | 0.5557 | 0.4 | 0.5557 | 0.7454 |
| 0.3103 | 4.5641 | 534 | 0.5685 | 0.4 | 0.5685 | 0.7540 |
| 0.3103 | 4.5812 | 536 | 0.4927 | 0.4 | 0.4927 | 0.7019 |
| 0.3103 | 4.5983 | 538 | 0.4156 | 0.4 | 0.4156 | 0.6447 |
| 0.3103 | 4.6154 | 540 | 0.4002 | 0.4906 | 0.4002 | 0.6326 |
| 0.3103 | 4.6325 | 542 | 0.4002 | 0.3529 | 0.4002 | 0.6326 |
| 0.3103 | 4.6496 | 544 | 0.4036 | 0.3529 | 0.4036 | 0.6353 |
| 0.3103 | 4.6667 | 546 | 0.4135 | 0.3529 | 0.4135 | 0.6431 |
| 0.3103 | 4.6838 | 548 | 0.4294 | 0.3529 | 0.4294 | 0.6553 |
| 0.3103 | 4.7009 | 550 | 0.4304 | 0.3529 | 0.4304 | 0.6561 |
| 0.3103 | 4.7179 | 552 | 0.4277 | 0.3529 | 0.4277 | 0.6540 |
| 0.3103 | 4.7350 | 554 | 0.4246 | 0.4590 | 0.4246 | 0.6516 |
| 0.3103 | 4.7521 | 556 | 0.4129 | 0.4590 | 0.4129 | 0.6426 |
| 0.3103 | 4.7692 | 558 | 0.4033 | 0.5161 | 0.4033 | 0.6351 |
| 0.3103 | 4.7863 | 560 | 0.3905 | 0.5161 | 0.3905 | 0.6249 |
| 0.3103 | 4.8034 | 562 | 0.3859 | 0.4906 | 0.3859 | 0.6212 |
| 0.3103 | 4.8205 | 564 | 0.3853 | 0.4906 | 0.3853 | 0.6207 |
| 0.3103 | 4.8376 | 566 | 0.3984 | 0.4906 | 0.3984 | 0.6312 |
| 0.3103 | 4.8547 | 568 | 0.4443 | 0.4444 | 0.4443 | 0.6666 |
| 0.3103 | 4.8718 | 570 | 0.4576 | 0.4 | 0.4576 | 0.6765 |
| 0.3103 | 4.8889 | 572 | 0.4409 | 0.4444 | 0.4409 | 0.6640 |
| 0.3103 | 4.9060 | 574 | 0.4319 | 0.4444 | 0.4319 | 0.6572 |
| 0.3103 | 4.9231 | 576 | 0.4066 | 0.4906 | 0.4066 | 0.6377 |
| 0.3103 | 4.9402 | 578 | 0.4002 | 0.4828 | 0.4002 | 0.6326 |
| 0.3103 | 4.9573 | 580 | 0.4058 | 0.4906 | 0.4058 | 0.6370 |
| 0.3103 | 4.9744 | 582 | 0.4309 | 0.4444 | 0.4309 | 0.6564 |
| 0.3103 | 4.9915 | 584 | 0.5111 | 0.2373 | 0.5111 | 0.7149 |
| 0.3103 | 5.0085 | 586 | 0.5814 | 0.4 | 0.5814 | 0.7625 |
| 0.3103 | 5.0256 | 588 | 0.6015 | 0.4 | 0.6015 | 0.7756 |
| 0.3103 | 5.0427 | 590 | 0.5598 | 0.4 | 0.5598 | 0.7482 |
| 0.3103 | 5.0598 | 592 | 0.4730 | 0.1724 | 0.4730 | 0.6877 |
| 0.3103 | 5.0769 | 594 | 0.4190 | 0.4444 | 0.4190 | 0.6473 |
| 0.3103 | 5.0940 | 596 | 0.4575 | 0.4590 | 0.4575 | 0.6764 |
| 0.3103 | 5.1111 | 598 | 0.5042 | 0.3824 | 0.5042 | 0.7101 |
| 0.3103 | 5.1282 | 600 | 0.4856 | 0.3793 | 0.4856 | 0.6969 |
| 0.3103 | 5.1453 | 602 | 0.4333 | 0.5161 | 0.4333 | 0.6583 |
| 0.3103 | 5.1624 | 604 | 0.4123 | 0.4231 | 0.4123 | 0.6421 |
| 0.3103 | 5.1795 | 606 | 0.4204 | 0.3774 | 0.4204 | 0.6484 |
| 0.3103 | 5.1966 | 608 | 0.4376 | 0.4 | 0.4376 | 0.6615 |
| 0.3103 | 5.2137 | 610 | 0.4449 | 0.4 | 0.4449 | 0.6670 |
| 0.3103 | 5.2308 | 612 | 0.4401 | 0.3774 | 0.4401 | 0.6634 |
| 0.3103 | 5.2479 | 614 | 0.4353 | 0.4231 | 0.4353 | 0.6598 |
| 0.3103 | 5.2650 | 616 | 0.4379 | 0.3529 | 0.4379 | 0.6618 |
| 0.3103 | 5.2821 | 618 | 0.4401 | 0.3529 | 0.4401 | 0.6634 |
| 0.3103 | 5.2991 | 620 | 0.4527 | 0.3774 | 0.4527 | 0.6728 |
| 0.3103 | 5.3162 | 622 | 0.4887 | 0.3333 | 0.4887 | 0.6990 |
| 0.3103 | 5.3333 | 624 | 0.5332 | 0.1429 | 0.5332 | 0.7302 |
| 0.3103 | 5.3504 | 626 | 0.5369 | 0.1818 | 0.5369 | 0.7327 |
| 0.3103 | 5.3675 | 628 | 0.5374 | 0.1818 | 0.5374 | 0.7331 |
| 0.3103 | 5.3846 | 630 | 0.5244 | 0.3333 | 0.5244 | 0.7241 |
| 0.3103 | 5.4017 | 632 | 0.5191 | 0.3333 | 0.5191 | 0.7205 |
| 0.3103 | 5.4188 | 634 | 0.5084 | 0.3774 | 0.5084 | 0.7130 |
| 0.3103 | 5.4359 | 636 | 0.5027 | 0.3774 | 0.5027 | 0.7090 |
| 0.3103 | 5.4530 | 638 | 0.5121 | 0.3774 | 0.5121 | 0.7156 |
| 0.3103 | 5.4701 | 640 | 0.5171 | 0.3774 | 0.5171 | 0.7191 |
| 0.3103 | 5.4872 | 642 | 0.5189 | 0.3774 | 0.5189 | 0.7203 |
| 0.3103 | 5.5043 | 644 | 0.5227 | 0.3774 | 0.5227 | 0.7230 |
| 0.3103 | 5.5214 | 646 | 0.5264 | 0.3774 | 0.5264 | 0.7255 |
| 0.3103 | 5.5385 | 648 | 0.5597 | 0.1818 | 0.5597 | 0.7481 |
| 0.3103 | 5.5556 | 650 | 0.6200 | 0.1818 | 0.6200 | 0.7874 |
| 0.3103 | 5.5726 | 652 | 0.6513 | 0.4000 | 0.6513 | 0.8070 |
| 0.3103 | 5.5897 | 654 | 0.6521 | 0.4000 | 0.6521 | 0.8075 |
| 0.3103 | 5.6068 | 656 | 0.6338 | 0.4000 | 0.6338 | 0.7961 |
| 0.3103 | 5.6239 | 658 | 0.6081 | 0.1818 | 0.6081 | 0.7798 |
| 0.3103 | 5.6410 | 660 | 0.5837 | 0.2154 | 0.5837 | 0.7640 |
| 0.3103 | 5.6581 | 662 | 0.5784 | 0.3143 | 0.5784 | 0.7605 |
| 0.3103 | 5.6752 | 664 | 0.5737 | 0.2154 | 0.5737 | 0.7574 |
| 0.3103 | 5.6923 | 666 | 0.5720 | 0.2000 | 0.5720 | 0.7563 |
| 0.3103 | 5.7094 | 668 | 0.5842 | 0.1818 | 0.5842 | 0.7643 |
| 0.3103 | 5.7265 | 670 | 0.5760 | 0.3158 | 0.5760 | 0.7589 |
| 0.3103 | 5.7436 | 672 | 0.5588 | 0.1818 | 0.5588 | 0.7475 |
| 0.3103 | 5.7607 | 674 | 0.5324 | 0.2222 | 0.5324 | 0.7296 |
| 0.3103 | 5.7778 | 676 | 0.5091 | 0.3774 | 0.5091 | 0.7135 |
| 0.3103 | 5.7949 | 678 | 0.4958 | 0.3793 | 0.4958 | 0.7042 |
| 0.3103 | 5.8120 | 680 | 0.4861 | 0.3793 | 0.4861 | 0.6972 |
| 0.3103 | 5.8291 | 682 | 0.4725 | 0.3793 | 0.4725 | 0.6874 |
| 0.3103 | 5.8462 | 684 | 0.4712 | 0.3793 | 0.4712 | 0.6864 |
| 0.3103 | 5.8632 | 686 | 0.4780 | 0.2373 | 0.4780 | 0.6914 |
| 0.3103 | 5.8803 | 688 | 0.4934 | 0.1818 | 0.4934 | 0.7024 |
| 0.3103 | 5.8974 | 690 | 0.4994 | 0.1818 | 0.4994 | 0.7067 |
| 0.3103 | 5.9145 | 692 | 0.4966 | 0.2000 | 0.4966 | 0.7047 |
| 0.3103 | 5.9316 | 694 | 0.4872 | 0.4762 | 0.4872 | 0.6980 |
| 0.3103 | 5.9487 | 696 | 0.4774 | 0.4762 | 0.4774 | 0.6909 |
| 0.3103 | 5.9658 | 698 | 0.4666 | 0.3793 | 0.4666 | 0.6831 |
| 0.3103 | 5.9829 | 700 | 0.4598 | 0.3774 | 0.4598 | 0.6781 |
| 0.3103 | 6.0 | 702 | 0.4582 | 0.3774 | 0.4582 | 0.6769 |
| 0.3103 | 6.0171 | 704 | 0.4616 | 0.1818 | 0.4616 | 0.6794 |
| 0.3103 | 6.0342 | 706 | 0.4657 | 0.1818 | 0.4657 | 0.6824 |
| 0.3103 | 6.0513 | 708 | 0.4618 | 0.3774 | 0.4618 | 0.6796 |
| 0.3103 | 6.0684 | 710 | 0.4508 | 0.3774 | 0.4508 | 0.6714 |
| 0.3103 | 6.0855 | 712 | 0.4440 | 0.3774 | 0.4440 | 0.6663 |
| 0.3103 | 6.1026 | 714 | 0.4519 | 0.4231 | 0.4519 | 0.6723 |
| 0.3103 | 6.1197 | 716 | 0.4580 | 0.4706 | 0.4580 | 0.6768 |
| 0.3103 | 6.1368 | 718 | 0.4563 | 0.4706 | 0.4563 | 0.6755 |
| 0.3103 | 6.1538 | 720 | 0.4382 | 0.3774 | 0.4382 | 0.6619 |
| 0.3103 | 6.1709 | 722 | 0.4325 | 0.3774 | 0.4325 | 0.6577 |
| 0.3103 | 6.1880 | 724 | 0.4408 | 0.3774 | 0.4408 | 0.6640 |
| 0.3103 | 6.2051 | 726 | 0.4543 | 0.2500 | 0.4543 | 0.6741 |
| 0.3103 | 6.2222 | 728 | 0.4775 | 0.3793 | 0.4775 | 0.6910 |
| 0.3103 | 6.2393 | 730 | 0.4907 | 0.3793 | 0.4907 | 0.7005 |
| 0.3103 | 6.2564 | 732 | 0.5179 | 0.3390 | 0.5179 | 0.7196 |
| 0.3103 | 6.2735 | 734 | 0.5790 | 0.4 | 0.5790 | 0.7609 |
| 0.3103 | 6.2906 | 736 | 0.5812 | 0.4 | 0.5812 | 0.7624 |
| 0.3103 | 6.3077 | 738 | 0.5425 | 0.4 | 0.5425 | 0.7366 |
| 0.3103 | 6.3248 | 740 | 0.4847 | 0.3390 | 0.4847 | 0.6962 |
| 0.3103 | 6.3419 | 742 | 0.4424 | 0.4 | 0.4424 | 0.6652 |
| 0.3103 | 6.3590 | 744 | 0.4115 | 0.4231 | 0.4115 | 0.6415 |
| 0.3103 | 6.3761 | 746 | 0.4285 | 0.4231 | 0.4285 | 0.6546 |
| 0.3103 | 6.3932 | 748 | 0.4454 | 0.4211 | 0.4454 | 0.6674 |
| 0.3103 | 6.4103 | 750 | 0.4528 | 0.5161 | 0.4528 | 0.6729 |
| 0.3103 | 6.4274 | 752 | 0.4489 | 0.5161 | 0.4489 | 0.6700 |
| 0.3103 | 6.4444 | 754 | 0.4286 | 0.5161 | 0.4286 | 0.6547 |
| 0.3103 | 6.4615 | 756 | 0.4177 | 0.4211 | 0.4177 | 0.6463 |
| 0.3103 | 6.4786 | 758 | 0.4370 | 0.2759 | 0.4370 | 0.6610 |
| 0.3103 | 6.4957 | 760 | 0.4672 | 0.2759 | 0.4672 | 0.6835 |
| 0.3103 | 6.5128 | 762 | 0.4852 | 0.4407 | 0.4852 | 0.6966 |
| 0.3103 | 6.5299 | 764 | 0.4776 | 0.2759 | 0.4776 | 0.6911 |
| 0.3103 | 6.5470 | 766 | 0.4635 | 0.2759 | 0.4635 | 0.6808 |
| 0.3103 | 6.5641 | 768 | 0.4477 | 0.4211 | 0.4477 | 0.6691 |
| 0.3103 | 6.5812 | 770 | 0.4439 | 0.5161 | 0.4439 | 0.6662 |
| 0.3103 | 6.5983 | 772 | 0.4491 | 0.5161 | 0.4491 | 0.6702 |
| 0.3103 | 6.6154 | 774 | 0.4512 | 0.5161 | 0.4512 | 0.6717 |
| 0.3103 | 6.6325 | 776 | 0.4468 | 0.5161 | 0.4468 | 0.6685 |
| 0.3103 | 6.6496 | 778 | 0.4529 | 0.3774 | 0.4529 | 0.6730 |
| 0.3103 | 6.6667 | 780 | 0.4967 | 0.2759 | 0.4967 | 0.7048 |
| 0.3103 | 6.6838 | 782 | 0.5462 | 0.2373 | 0.5462 | 0.7390 |
| 0.3103 | 6.7009 | 784 | 0.5406 | 0.2373 | 0.5406 | 0.7353 |
| 0.3103 | 6.7179 | 786 | 0.5365 | 0.2373 | 0.5365 | 0.7325 |
| 0.3103 | 6.7350 | 788 | 0.5061 | 0.2759 | 0.5061 | 0.7114 |
| 0.3103 | 6.7521 | 790 | 0.4622 | 0.4444 | 0.4622 | 0.6799 |
| 0.3103 | 6.7692 | 792 | 0.4481 | 0.3774 | 0.4481 | 0.6694 |
| 0.3103 | 6.7863 | 794 | 0.4606 | 0.4545 | 0.4606 | 0.6787 |
| 0.3103 | 6.8034 | 796 | 0.4642 | 0.4545 | 0.4642 | 0.6813 |
| 0.3103 | 6.8205 | 798 | 0.4626 | 0.4923 | 0.4626 | 0.6801 |
| 0.3103 | 6.8376 | 800 | 0.4501 | 0.4923 | 0.4501 | 0.6709 |
| 0.3103 | 6.8547 | 802 | 0.4395 | 0.4545 | 0.4395 | 0.6629 |
| 0.3103 | 6.8718 | 804 | 0.4291 | 0.5161 | 0.4291 | 0.6551 |
| 0.3103 | 6.8889 | 806 | 0.4383 | 0.3774 | 0.4383 | 0.6621 |
| 0.3103 | 6.9060 | 808 | 0.4839 | 0.2909 | 0.4839 | 0.6956 |
| 0.3103 | 6.9231 | 810 | 0.5335 | 0.2373 | 0.5335 | 0.7304 |
| 0.3103 | 6.9402 | 812 | 0.5640 | 0.4 | 0.5640 | 0.7510 |
| 0.3103 | 6.9573 | 814 | 0.5793 | 0.4 | 0.5793 | 0.7611 |
| 0.3103 | 6.9744 | 816 | 0.5747 | 0.4 | 0.5747 | 0.7581 |
| 0.3103 | 6.9915 | 818 | 0.5451 | 0.4 | 0.5451 | 0.7383 |
| 0.3103 | 7.0085 | 820 | 0.4996 | 0.2759 | 0.4996 | 0.7069 |
| 0.3103 | 7.0256 | 822 | 0.4673 | 0.3774 | 0.4673 | 0.6836 |
| 0.3103 | 7.0427 | 824 | 0.4546 | 0.3774 | 0.4546 | 0.6743 |
| 0.3103 | 7.0598 | 826 | 0.4540 | 0.3774 | 0.4540 | 0.6738 |
| 0.3103 | 7.0769 | 828 | 0.4644 | 0.3571 | 0.4644 | 0.6814 |
| 0.3103 | 7.0940 | 830 | 0.4741 | 0.4 | 0.4741 | 0.6885 |
| 0.3103 | 7.1111 | 832 | 0.4695 | 0.4 | 0.4695 | 0.6852 |
| 0.3103 | 7.1282 | 834 | 0.4580 | 0.3607 | 0.4580 | 0.6768 |
| 0.3103 | 7.1453 | 836 | 0.4458 | 0.4211 | 0.4458 | 0.6677 |
| 0.3103 | 7.1624 | 838 | 0.4458 | 0.3774 | 0.4458 | 0.6677 |
| 0.3103 | 7.1795 | 840 | 0.4508 | 0.3774 | 0.4508 | 0.6714 |
| 0.3103 | 7.1966 | 842 | 0.4520 | 0.3774 | 0.4520 | 0.6723 |
| 0.3103 | 7.2137 | 844 | 0.4522 | 0.3774 | 0.4522 | 0.6724 |
| 0.3103 | 7.2308 | 846 | 0.4596 | 0.3774 | 0.4596 | 0.6779 |
| 0.3103 | 7.2479 | 848 | 0.4675 | 0.3774 | 0.4675 | 0.6837 |
| 0.3103 | 7.2650 | 850 | 0.4731 | 0.3774 | 0.4731 | 0.6878 |
| 0.3103 | 7.2821 | 852 | 0.4850 | 0.3774 | 0.4850 | 0.6964 |
| 0.3103 | 7.2991 | 854 | 0.4918 | 0.3333 | 0.4918 | 0.7013 |
| 0.3103 | 7.3162 | 856 | 0.4865 | 0.3774 | 0.4865 | 0.6975 |
| 0.3103 | 7.3333 | 858 | 0.4815 | 0.3774 | 0.4815 | 0.6939 |
| 0.3103 | 7.3504 | 860 | 0.4819 | 0.3774 | 0.4819 | 0.6942 |
| 0.3103 | 7.3675 | 862 | 0.4815 | 0.3774 | 0.4815 | 0.6939 |
| 0.3103 | 7.3846 | 864 | 0.4700 | 0.3774 | 0.4700 | 0.6856 |
| 0.3103 | 7.4017 | 866 | 0.4604 | 0.3774 | 0.4604 | 0.6786 |
| 0.3103 | 7.4188 | 868 | 0.4579 | 0.3774 | 0.4579 | 0.6766 |
| 0.3103 | 7.4359 | 870 | 0.4568 | 0.3774 | 0.4568 | 0.6758 |
| 0.3103 | 7.4530 | 872 | 0.4561 | 0.3774 | 0.4561 | 0.6754 |
| 0.3103 | 7.4701 | 874 | 0.4575 | 0.3774 | 0.4575 | 0.6764 |
| 0.3103 | 7.4872 | 876 | 0.4655 | 0.3774 | 0.4655 | 0.6823 |
| 0.3103 | 7.5043 | 878 | 0.4809 | 0.3774 | 0.4809 | 0.6935 |
| 0.3103 | 7.5214 | 880 | 0.4887 | 0.3774 | 0.4887 | 0.6990 |
| 0.3103 | 7.5385 | 882 | 0.4938 | 0.3774 | 0.4938 | 0.7027 |
| 0.3103 | 7.5556 | 884 | 0.4983 | 0.1818 | 0.4983 | 0.7059 |
| 0.3103 | 7.5726 | 886 | 0.4996 | 0.1818 | 0.4996 | 0.7068 |
| 0.3103 | 7.5897 | 888 | 0.5011 | 0.1818 | 0.5011 | 0.7079 |
| 0.3103 | 7.6068 | 890 | 0.5025 | 0.1818 | 0.5025 | 0.7089 |
| 0.3103 | 7.6239 | 892 | 0.5031 | 0.1818 | 0.5031 | 0.7093 |
| 0.3103 | 7.6410 | 894 | 0.5019 | 0.1818 | 0.5019 | 0.7085 |
| 0.3103 | 7.6581 | 896 | 0.4981 | 0.2222 | 0.4981 | 0.7057 |
| 0.3103 | 7.6752 | 898 | 0.4982 | 0.2222 | 0.4982 | 0.7058 |
| 0.3103 | 7.6923 | 900 | 0.4949 | 0.3774 | 0.4949 | 0.7035 |
| 0.3103 | 7.7094 | 902 | 0.4939 | 0.3774 | 0.4939 | 0.7028 |
| 0.3103 | 7.7265 | 904 | 0.4954 | 0.3810 | 0.4954 | 0.7038 |
| 0.3103 | 7.7436 | 906 | 0.5010 | 0.4706 | 0.5010 | 0.7078 |
| 0.3103 | 7.7607 | 908 | 0.5036 | 0.4706 | 0.5036 | 0.7097 |
| 0.3103 | 7.7778 | 910 | 0.5028 | 0.3810 | 0.5028 | 0.7091 |
| 0.3103 | 7.7949 | 912 | 0.5085 | 0.3793 | 0.5085 | 0.7131 |
| 0.3103 | 7.8120 | 914 | 0.5089 | 0.3793 | 0.5089 | 0.7134 |
| 0.3103 | 7.8291 | 916 | 0.5105 | 0.3793 | 0.5105 | 0.7145 |
| 0.3103 | 7.8462 | 918 | 0.5084 | 0.3793 | 0.5084 | 0.7130 |
| 0.3103 | 7.8632 | 920 | 0.5035 | 0.3793 | 0.5035 | 0.7096 |
| 0.3103 | 7.8803 | 922 | 0.5018 | 0.3774 | 0.5018 | 0.7084 |
| 0.3103 | 7.8974 | 924 | 0.4953 | 0.3793 | 0.4953 | 0.7038 |
| 0.3103 | 7.9145 | 926 | 0.4902 | 0.3793 | 0.4902 | 0.7002 |
| 0.3103 | 7.9316 | 928 | 0.4868 | 0.3793 | 0.4868 | 0.6977 |
| 0.3103 | 7.9487 | 930 | 0.4886 | 0.3793 | 0.4886 | 0.6990 |
| 0.3103 | 7.9658 | 932 | 0.4936 | 0.3793 | 0.4936 | 0.7026 |
| 0.3103 | 7.9829 | 934 | 0.4953 | 0.3793 | 0.4953 | 0.7038 |
| 0.3103 | 8.0 | 936 | 0.4985 | 0.3793 | 0.4985 | 0.7061 |
| 0.3103 | 8.0171 | 938 | 0.4944 | 0.3793 | 0.4944 | 0.7032 |
| 0.3103 | 8.0342 | 940 | 0.4924 | 0.3793 | 0.4924 | 0.7017 |
| 0.3103 | 8.0513 | 942 | 0.4935 | 0.3793 | 0.4935 | 0.7025 |
| 0.3103 | 8.0684 | 944 | 0.4989 | 0.3793 | 0.4989 | 0.7063 |
| 0.3103 | 8.0855 | 946 | 0.5049 | 0.3793 | 0.5049 | 0.7106 |
| 0.3103 | 8.1026 | 948 | 0.5091 | 0.3793 | 0.5091 | 0.7135 |
| 0.3103 | 8.1197 | 950 | 0.5158 | 0.2222 | 0.5158 | 0.7182 |
| 0.3103 | 8.1368 | 952 | 0.5231 | 0.2222 | 0.5231 | 0.7233 |
| 0.3103 | 8.1538 | 954 | 0.5317 | 0.1818 | 0.5317 | 0.7292 |
| 0.3103 | 8.1709 | 956 | 0.5341 | 0.2500 | 0.5341 | 0.7308 |
| 0.3103 | 8.1880 | 958 | 0.5284 | 0.1818 | 0.5284 | 0.7269 |
| 0.3103 | 8.2051 | 960 | 0.5179 | 0.2222 | 0.5179 | 0.7196 |
| 0.3103 | 8.2222 | 962 | 0.5092 | 0.2222 | 0.5092 | 0.7136 |
| 0.3103 | 8.2393 | 964 | 0.5040 | 0.3774 | 0.5040 | 0.7099 |
| 0.3103 | 8.2564 | 966 | 0.4994 | 0.3774 | 0.4994 | 0.7067 |
| 0.3103 | 8.2735 | 968 | 0.4906 | 0.3774 | 0.4906 | 0.7004 |
| 0.3103 | 8.2906 | 970 | 0.4797 | 0.3774 | 0.4797 | 0.6926 |
| 0.3103 | 8.3077 | 972 | 0.4738 | 0.3793 | 0.4738 | 0.6884 |
| 0.3103 | 8.3248 | 974 | 0.4714 | 0.3793 | 0.4714 | 0.6866 |
| 0.3103 | 8.3419 | 976 | 0.4710 | 0.3793 | 0.4710 | 0.6863 |
| 0.3103 | 8.3590 | 978 | 0.4804 | 0.3793 | 0.4804 | 0.6931 |
| 0.3103 | 8.3761 | 980 | 0.4905 | 0.3793 | 0.4905 | 0.7004 |
| 0.3103 | 8.3932 | 982 | 0.5034 | 0.2373 | 0.5034 | 0.7095 |
| 0.3103 | 8.4103 | 984 | 0.5126 | 0.2222 | 0.5126 | 0.7160 |
| 0.3103 | 8.4274 | 986 | 0.5226 | 0.1429 | 0.5226 | 0.7229 |
| 0.3103 | 8.4444 | 988 | 0.5297 | 0.2105 | 0.5297 | 0.7278 |
| 0.3103 | 8.4615 | 990 | 0.5358 | 0.2105 | 0.5358 | 0.7320 |
| 0.3103 | 8.4786 | 992 | 0.5392 | 0.2105 | 0.5392 | 0.7343 |
| 0.3103 | 8.4957 | 994 | 0.5393 | 0.2105 | 0.5393 | 0.7344 |
| 0.3103 | 8.5128 | 996 | 0.5329 | 0.2105 | 0.5329 | 0.7300 |
| 0.3103 | 8.5299 | 998 | 0.5225 | 0.2500 | 0.5225 | 0.7228 |
| 0.0934 | 8.5470 | 1000 | 0.5112 | 0.2222 | 0.5112 | 0.7150 |
| 0.0934 | 8.5641 | 1002 | 0.4975 | 0.3774 | 0.4975 | 0.7053 |
| 0.0934 | 8.5812 | 1004 | 0.4842 | 0.3774 | 0.4842 | 0.6958 |
| 0.0934 | 8.5983 | 1006 | 0.4753 | 0.4231 | 0.4753 | 0.6894 |
| 0.0934 | 8.6154 | 1008 | 0.4684 | 0.4231 | 0.4684 | 0.6844 |
| 0.0934 | 8.6325 | 1010 | 0.4644 | 0.3529 | 0.4644 | 0.6815 |
| 0.0934 | 8.6496 | 1012 | 0.4636 | 0.4000 | 0.4636 | 0.6809 |
| 0.0934 | 8.6667 | 1014 | 0.4623 | 0.4000 | 0.4623 | 0.6800 |
| 0.0934 | 8.6838 | 1016 | 0.4615 | 0.4000 | 0.4615 | 0.6793 |
| 0.0934 | 8.7009 | 1018 | 0.4630 | 0.3529 | 0.4630 | 0.6804 |
| 0.0934 | 8.7179 | 1020 | 0.4711 | 0.4231 | 0.4711 | 0.6864 |
| 0.0934 | 8.7350 | 1022 | 0.4824 | 0.3774 | 0.4824 | 0.6946 |
| 0.0934 | 8.7521 | 1024 | 0.4890 | 0.3774 | 0.4890 | 0.6993 |
| 0.0934 | 8.7692 | 1026 | 0.4985 | 0.3774 | 0.4985 | 0.7061 |
| 0.0934 | 8.7863 | 1028 | 0.5070 | 0.3774 | 0.5070 | 0.7120 |
| 0.0934 | 8.8034 | 1030 | 0.5110 | 0.2222 | 0.5110 | 0.7148 |
| 0.0934 | 8.8205 | 1032 | 0.5134 | 0.2222 | 0.5134 | 0.7165 |
| 0.0934 | 8.8376 | 1034 | 0.5171 | 0.2222 | 0.5171 | 0.7191 |
| 0.0934 | 8.8547 | 1036 | 0.5211 | 0.2222 | 0.5211 | 0.7219 |
| 0.0934 | 8.8718 | 1038 | 0.5240 | 0.2222 | 0.5240 | 0.7238 |
| 0.0934 | 8.8889 | 1040 | 0.5242 | 0.2222 | 0.5242 | 0.7240 |
| 0.0934 | 8.9060 | 1042 | 0.5203 | 0.2222 | 0.5203 | 0.7214 |
| 0.0934 | 8.9231 | 1044 | 0.5168 | 0.2222 | 0.5168 | 0.7189 |
| 0.0934 | 8.9402 | 1046 | 0.5135 | 0.2222 | 0.5135 | 0.7166 |
| 0.0934 | 8.9573 | 1048 | 0.5089 | 0.3774 | 0.5089 | 0.7134 |
| 0.0934 | 8.9744 | 1050 | 0.5070 | 0.3774 | 0.5070 | 0.7120 |
| 0.0934 | 8.9915 | 1052 | 0.5048 | 0.3774 | 0.5048 | 0.7105 |
| 0.0934 | 9.0085 | 1054 | 0.5031 | 0.3793 | 0.5031 | 0.7093 |
| 0.0934 | 9.0256 | 1056 | 0.5041 | 0.3793 | 0.5041 | 0.7100 |
| 0.0934 | 9.0427 | 1058 | 0.5093 | 0.3793 | 0.5093 | 0.7137 |
| 0.0934 | 9.0598 | 1060 | 0.5127 | 0.3793 | 0.5127 | 0.7160 |
| 0.0934 | 9.0769 | 1062 | 0.5146 | 0.2373 | 0.5146 | 0.7173 |
| 0.0934 | 9.0940 | 1064 | 0.5155 | 0.2373 | 0.5155 | 0.7180 |
| 0.0934 | 9.1111 | 1066 | 0.5159 | 0.2222 | 0.5159 | 0.7183 |
| 0.0934 | 9.1282 | 1068 | 0.5172 | 0.2222 | 0.5172 | 0.7191 |
| 0.0934 | 9.1453 | 1070 | 0.5202 | 0.2222 | 0.5202 | 0.7213 |
| 0.0934 | 9.1624 | 1072 | 0.5235 | 0.2222 | 0.5235 | 0.7236 |
| 0.0934 | 9.1795 | 1074 | 0.5263 | 0.2222 | 0.5263 | 0.7255 |
| 0.0934 | 9.1966 | 1076 | 0.5259 | 0.2222 | 0.5259 | 0.7252 |
| 0.0934 | 9.2137 | 1078 | 0.5266 | 0.2222 | 0.5266 | 0.7257 |
| 0.0934 | 9.2308 | 1080 | 0.5259 | 0.2222 | 0.5259 | 0.7252 |
| 0.0934 | 9.2479 | 1082 | 0.5258 | 0.2222 | 0.5258 | 0.7251 |
| 0.0934 | 9.2650 | 1084 | 0.5242 | 0.2222 | 0.5242 | 0.7240 |
| 0.0934 | 9.2821 | 1086 | 0.5226 | 0.2222 | 0.5226 | 0.7229 |
| 0.0934 | 9.2991 | 1088 | 0.5211 | 0.2222 | 0.5211 | 0.7219 |
| 0.0934 | 9.3162 | 1090 | 0.5218 | 0.2222 | 0.5218 | 0.7223 |
| 0.0934 | 9.3333 | 1092 | 0.5210 | 0.2222 | 0.5210 | 0.7218 |
| 0.0934 | 9.3504 | 1094 | 0.5194 | 0.2222 | 0.5194 | 0.7207 |
| 0.0934 | 9.3675 | 1096 | 0.5182 | 0.2222 | 0.5182 | 0.7199 |
| 0.0934 | 9.3846 | 1098 | 0.5192 | 0.2222 | 0.5192 | 0.7206 |
| 0.0934 | 9.4017 | 1100 | 0.5218 | 0.2909 | 0.5218 | 0.7224 |
| 0.0934 | 9.4188 | 1102 | 0.5238 | 0.2500 | 0.5238 | 0.7237 |
| 0.0934 | 9.4359 | 1104 | 0.5258 | 0.2500 | 0.5258 | 0.7251 |
| 0.0934 | 9.4530 | 1106 | 0.5276 | 0.2500 | 0.5276 | 0.7263 |
| 0.0934 | 9.4701 | 1108 | 0.5277 | 0.3158 | 0.5277 | 0.7264 |
| 0.0934 | 9.4872 | 1110 | 0.5272 | 0.3158 | 0.5272 | 0.7261 |
| 0.0934 | 9.5043 | 1112 | 0.5265 | 0.3158 | 0.5265 | 0.7256 |
| 0.0934 | 9.5214 | 1114 | 0.5262 | 0.3158 | 0.5262 | 0.7254 |
| 0.0934 | 9.5385 | 1116 | 0.5250 | 0.3158 | 0.5250 | 0.7245 |
| 0.0934 | 9.5556 | 1118 | 0.5240 | 0.3158 | 0.5240 | 0.7239 |
| 0.0934 | 9.5726 | 1120 | 0.5223 | 0.3158 | 0.5223 | 0.7227 |
| 0.0934 | 9.5897 | 1122 | 0.5192 | 0.2500 | 0.5192 | 0.7206 |
| 0.0934 | 9.6068 | 1124 | 0.5169 | 0.2500 | 0.5169 | 0.7189 |
| 0.0934 | 9.6239 | 1126 | 0.5153 | 0.2909 | 0.5153 | 0.7179 |
| 0.0934 | 9.6410 | 1128 | 0.5140 | 0.2909 | 0.5140 | 0.7169 |
| 0.0934 | 9.6581 | 1130 | 0.5132 | 0.2909 | 0.5132 | 0.7164 |
| 0.0934 | 9.6752 | 1132 | 0.5115 | 0.2909 | 0.5115 | 0.7152 |
| 0.0934 | 9.6923 | 1134 | 0.5101 | 0.2909 | 0.5101 | 0.7142 |
| 0.0934 | 9.7094 | 1136 | 0.5091 | 0.2909 | 0.5091 | 0.7135 |
| 0.0934 | 9.7265 | 1138 | 0.5080 | 0.2222 | 0.5080 | 0.7127 |
| 0.0934 | 9.7436 | 1140 | 0.5068 | 0.2222 | 0.5068 | 0.7119 |
| 0.0934 | 9.7607 | 1142 | 0.5058 | 0.2222 | 0.5058 | 0.7112 |
| 0.0934 | 9.7778 | 1144 | 0.5050 | 0.2222 | 0.5050 | 0.7106 |
| 0.0934 | 9.7949 | 1146 | 0.5040 | 0.2222 | 0.5040 | 0.7099 |
| 0.0934 | 9.8120 | 1148 | 0.5029 | 0.2222 | 0.5029 | 0.7092 |
| 0.0934 | 9.8291 | 1150 | 0.5019 | 0.2222 | 0.5019 | 0.7084 |
| 0.0934 | 9.8462 | 1152 | 0.5007 | 0.2222 | 0.5007 | 0.7076 |
| 0.0934 | 9.8632 | 1154 | 0.4999 | 0.2222 | 0.4999 | 0.7070 |
| 0.0934 | 9.8803 | 1156 | 0.4992 | 0.2222 | 0.4992 | 0.7065 |
| 0.0934 | 9.8974 | 1158 | 0.4986 | 0.2222 | 0.4986 | 0.7061 |
| 0.0934 | 9.9145 | 1160 | 0.4983 | 0.2222 | 0.4983 | 0.7059 |
| 0.0934 | 9.9316 | 1162 | 0.4983 | 0.2222 | 0.4983 | 0.7059 |
| 0.0934 | 9.9487 | 1164 | 0.4982 | 0.2222 | 0.4982 | 0.7058 |
| 0.0934 | 9.9658 | 1166 | 0.4982 | 0.2222 | 0.4982 | 0.7058 |
| 0.0934 | 9.9829 | 1168 | 0.4983 | 0.2222 | 0.4983 | 0.7059 |
| 0.0934 | 10.0 | 1170 | 0.4983 | 0.2222 | 0.4983 | 0.7059 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
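A minimal scoring sketch (assuming the checkpoint carries a single-output regression head, which the Qwk/MSE/RMSE metrics suggest):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "MayBashendy/Arabic_FineTuningAraBERT_AugV0_k25_task2_organization_fold1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "نص تجريبي للتقييم"  # placeholder Arabic input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single regression-style score (assumption)
print(score)
```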
|
dextersud/sudbits_llama_pharma_clincal_model | dextersud | "2024-09-04T15:20:11Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-01T13:46:13Z" | ---
license: mit
language:
- en
base_model: meta-llama/Llama-2-7b-hf
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Shudhanshu Shekhar
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
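In the absence of author-provided code, a minimal generation sketch (assuming the checkpoint loads as a standard Llama causal LM; the prompt is illustrative only):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "dextersud/sudbits_llama_pharma_clincal_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the key exclusion criteria of a phase II oncology trial:"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```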
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ModelCloud/Mistral-Nemo-Instruct-2407-gptq-4bit | ModelCloud | "2024-07-23T18:36:13Z" | 338 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mistral-nemo",
"gptq",
"4bit",
"int4",
"gptqmodel",
"modelcloud",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | "2024-07-19T02:31:16Z" | ---
tags:
- mistral-nemo
- gptq
- 4bit
- int4
- gptqmodel
- modelcloud
---
This model has been quantized using [GPTQModel](https://github.com/ModelCloud/GPTQModel).
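A minimal loading sketch (assuming a GPTQ-capable backend such as GPTQModel or AutoGPTQ/Optimum is installed so that `transformers` can dispatch the quantized weights):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelCloud/Mistral-Nemo-Instruct-2407-gptq-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the quantized layers on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain GPTQ quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
The quantization settings below are those reported for this checkpoint.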
- **bits**: 4
- **group_size**: 128
- **desc_act**: true
- **static_groups**: false
- **sym**: true
- **lm_head**: false
- **damp_percent**: 0.01
- **true_sequential**: true
- **model_name_or_path**: ""
- **model_file_base_name**: "model"
- **quant_method**: "gptq"
- **checkpoint_format**: "gptq"
- **meta**:
- **quantizer**: "gptqmodel:0.9.9-dev0" |
vikp/pdf_postprocessor_t5 | vikp | "2023-12-22T05:55:40Z" | 9,414 | 15 | transformers | [
"transformers",
"pytorch",
"t5",
"token-classification",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-11-30T00:44:29Z" | Postprocess markdown generated from a pdf to clean up newlines, spaces, etc.
Used in [marker](https://github.com/VikParuchuri/marker). |
RIAL-AI/prueba | RIAL-AI | "2025-02-17T13:56:36Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-17T13:56:35Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Prueba
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('RIAL-AI/prueba', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
buddhilive/distilbert-finetuned-squad | buddhilive | "2023-09-09T20:10:15Z" | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-09-09T19:50:01Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: buddhilive/distilbert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# buddhilive/distilbert-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2331
- Validation Loss: 0.0
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.6852 | 0.0 | 0 |
| 1.3993 | 0.0 | 1 |
| 1.2331 | 0.0 | 2 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
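A minimal question-answering sketch (assuming the TensorFlow weights load through the standard pipeline):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="buddhilive/distilbert-finetuned-squad",
    framework="tf",  # the repo ships TensorFlow weights
)
result = qa(question="What is the capital of France?", context="Paris is the capital of France.")
print(result)
```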
|
DoppelReflEx/MN-12B-FoxFrame-Yukina | DoppelReflEx | "2025-02-26T06:37:10Z" | 40 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:DoppelReflEx/MN-12B-Mimicore-GreenSnake",
"base_model:merge:DoppelReflEx/MN-12B-Mimicore-GreenSnake",
"base_model:Epiculous/Violet_Twilight-v0.2",
"base_model:merge:Epiculous/Violet_Twilight-v0.2",
"base_model:IntervitensInc/Mistral-Nemo-Base-2407-chatml",
"base_model:merge:IntervitensInc/Mistral-Nemo-Base-2407-chatml",
"base_model:cgato/Nemo-12b-Humanize-KTO-Experimental-Latest",
"base_model:merge:cgato/Nemo-12b-Humanize-KTO-Experimental-Latest",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-08T08:58:38Z" | ---
license: cc-by-nc-4.0
base_model:
- DoppelReflEx/MN-12B-Mimicore-GreenSnake
- cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
- Epiculous/Violet_Twilight-v0.2
- IntervitensInc/Mistral-Nemo-Base-2407-chatml
library_name: transformers
tags:
- mergekit
- merge
---
Version: [Miyuri](https://huggingface.co/DoppelReflEx/MN-12B-FoxFrame-Miyuri) - [Yukina](#) - [Shinori](https://huggingface.co/DoppelReflEx/MN-12B-FoxFrame-Shinori)
# What is this?
A very nice merge series, to be honest. I have tested this and gotten good results so far.
In my test character card, it gives me a **light tsundere and yandere** girl, LOL. You should try it too, or try whichever version you like most.
Good for RP and ERP.
PS: Sometimes, as a quirk inherited from cgato/Nemo-12b-Humanize-KTO-Experimental-Latest, the ```<|im_end|>``` token will appear in the output, and you must write some words or reroll the message.
## Template? ChatML, of course!
<details>
<summary>Merge Detail</summary>
<p>
### Models Merged
The following models were included in the merge:
* [DoppelReflEx/MN-12B-Mimicore-GreenSnake](https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-GreenSnake)
* [cgato/Nemo-12b-Humanize-KTO-Experimental-Latest](https://huggingface.co/cgato/Nemo-12b-Humanize-KTO-Experimental-Latest)
* [Epiculous/Violet_Twilight-v0.2](https://huggingface.co/Epiculous/Violet_Twilight-v0.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
parameters:
density: 0.9
weight: 1
- model: DoppelReflEx/MN-12B-Mimicore-GreenSnake
parameters:
density: 0.5
weight: 0.7
- model: Epiculous/Violet_Twilight-v0.2
parameters:
density: 0.7
weight: 0.5
merge_method: dare_ties
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
tokenizer_source: base
```
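For reference, here is a minimal sketch of reproducing this merge with mergekit's Python API. It assumes the configuration above is saved as `config.yaml` and that the output path is arbitrary; the `mergekit-yaml` CLI works just as well:
```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (assumed saved as config.yaml)
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the dare_ties merge and write the result to ./merged
run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
```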
</p>
</details> |
MaziyarPanahi/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF | MaziyarPanahi | "2024-05-21T18:37:50Z" | 62 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:Kukedlc/NeuralSynthesis-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B",
"base_model:quantized:automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B"
] | text-generation | "2024-05-21T18:05:26Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- base_model:Kukedlc/NeuralSynthesis-7B-v0.1
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF
base_model: automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF](https://huggingface.co/MaziyarPanahi/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF)
- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B](https://huggingface.co/automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B)
## Description
[MaziyarPanahi/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF](https://huggingface.co/MaziyarPanahi/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF) contains GGUF format model files for [automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B](https://huggingface.co/automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
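As a minimal Python sketch, one of these GGUF files can be downloaded and loaded with `llama-cpp-python`. The exact filename below is an assumption, so check the repository's file list for the quant you want:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = "MaziyarPanahi/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF"
# Assumed filename; pick the quantisation level you need from the repo's file list
filename = "Ognoexperiment27multi_verse_modelNeuralsynthesis-7B.Q4_K_M.gguf"

model_path = hf_hub_download(repo_id=repo_id, filename=filename)

# n_gpu_layers=-1 offloads all layers to the GPU if one is available
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)
print(llm("AI is going to", max_tokens=64)["choices"][0]["text"])
```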
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
com3dian/Bart-large-paper2slides-expander | com3dian | "2024-03-06T13:52:03Z" | 43 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"en",
"dataset:cnn_dailymail",
"arxiv:1711.00043",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-07-13T15:35:11Z" | ---
language:
- en
widget:
- text: >
Bag-of-feature representations can be described by analogy to bag-of-words
representations.
- text: >
Self-attention is an attention mechanism relating different positions of a
single sequence in order to compute a representation of the sequence.
license:
- mit
pipeline_tag: text2text-generation
datasets:
- cnn_dailymail
---
# Bart-Large Expansion Model

This repository contains the **Bart-Large-paper2slides-expander Model**, which has been pre-trained on the CNN/DailyMail dataset and fine-tuned on the [Automatic Slide Generation from Scientific Papers dataset](https://www.kaggle.com/datasets/andrewmvd/automatic-slide-generation-from-scientific-papers) using unsupervised learning techniques, following the algorithm from the paper '[Unsupervised Machine Translation Using Monolingual Corpora Only](https://arxiv.org/abs/1711.00043)'.
Its primary focus is to expand **scientific text** by providing alternative, expanded versions with improved clarity and accuracy. The model is trained in parallel with the [**Bart-Large-paper2slides-summarizer Model**](https://huggingface.co/com3dian/Bart-large-paper2slides-summarizer) from the same contributor.
## Model Details
- **Model Architecture**: Bart-Large
- **Fine-tuning Dataset**: [Automatic Slide Generation from Scientific Papers](https://www.kaggle.com/datasets/andrewmvd/automatic-slide-generation-from-scientific-papers)
- **Fine-tuning Method**: Unsupervised Learning
[Bart](https://huggingface.co/transformers/model_doc/bart.html) (Bidirectional and Auto-Regressive Transformers) is a sequence-to-sequence (seq2seq) model developed by Facebook AI Research. It has shown exceptional performance in various natural language processing (NLP) tasks such as text summarization, text generation, and machine translation.
This particular model, Bart-Large, is the larger version of the Bart model. It consists of 12 encoder and decoder layers and has a total of 400 million parameters.
## Usage
To use this model, you can leverage the Hugging Face [Transformers](https://huggingface.co/transformers/) library. Here's an example of how to use it in Python:
```python
from transformers import BartTokenizer, BartForConditionalGeneration, pipeline
# Load the model and tokenizer
model_name = "com3dian/Bart-large-paper2slides-expander"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)
# Generate summary from input text
input_text = "Your input text here..."
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids)
# Decode generated summaries
expanded_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(expanded_text)
# Or using the pipeline API
expander = pipeline("text2text-generation", model=model_name)
expanded_text = expander(input_text, max_length=50, min_length=30, do_sample=False)
print(expanded_text)
```
Ensure you have the `transformers` library installed before running the code. You can install it using `pip`:
```
pip install transformers
```
## Model Fine-tuning Details
The fine-tuning process for this model involved training on the slide generation dataset using unsupervised learning techniques. Unsupervised learning refers to training a model without explicit human-labeled targets. Instead, the model learns to back-expand the input provided by the summarization model into the original texts.
The specific hyperparameters and training details used for fine-tuning this model are as follows:
- Batch Size: 4
- Learning Rate: 2e-6
- Training Steps: 3*7
- Optimizer: AdamW
## Acknowledgments
We would like to acknowledge the authors of the Bart model and the creators of the slide generation dataset for their valuable contributions, which have enabled the development of this fine-tuned model.
If you use this model or find it helpful in your work, please consider citing the original Bart model, the slide generation dataset, and [this paper](https://studenttheses.uu.nl/handle/20.500.12932/45939) to provide proper credit to the respective authors.
## License
This model and the associated code are released under the [MIT license](https://opensource.org/license/mit/). |
Doctor-Shotgun/TinyLlama-1.1B-32k | Doctor-Shotgun | "2024-02-02T21:25:35Z" | 116 | 28 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"llama 2",
"en",
"dataset:togethercomputer/RedPajama-Data-1T-Sample",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-29T05:19:34Z" | ---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T-Sample
language:
- en
tags:
- llama
- llama 2
---
# TinyLlama-1.1B-32k
32k-context finetune of TinyLlama-1.1B using an increased rope theta (rope frequency base), meant to serve as a long-context speculative decoding model.
Created from [TinyLlama-1.1B](https://huggingface.co/TinyLlama/tinyLlama-intermediate-checkpoints-after-1T-token) by further pretraining at a context length of 32768 on [togethercomputer/RedPajama-Data-1T-Sample](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample).
Of note, the base checkpoint used was from the "final model" commit fad4f1a5cd0563ac41349b8fec2e6e51156568a0, which was subsequently reverted, and not the current main-branch 3T checkpoint of TinyLlama-1.1B.
[EXL2 Quants by turboderp](https://huggingface.co/turboderp/TinyLlama-1B-32k-exl2)
The quantized model fits alongside a 4.25bpw 70B model at 32k sequence length on a single A6000 and provides noticeable speed-up with speculative decoding.
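As a rough illustration of the intended role, here is a sketch of assisted generation with the `transformers` API; the 70B target model named below is only an assumption, and any target sharing the Llama 2 tokenizer should work:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_name = "meta-llama/Llama-2-70b-hf"  # assumption: any Llama 2 target with the same tokenizer
draft_name = "Doctor-Shotgun/TinyLlama-1.1B-32k"

tokenizer = AutoTokenizer.from_pretrained(target_name)
target = AutoModelForCausalLM.from_pretrained(target_name, torch_dtype=torch.float16, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_name, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Long context goes here ...", return_tensors="pt").to(target.device)

# The small 32k-context model drafts tokens that the large model verifies (speculative decoding)
output = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```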
### Wikitext (wikitext-2-raw-v1_train) Perplexity (64 rows) as evaluated via [exllamav2](https://github.com/turboderp/exllamav2):
| Model | 2048 | 4096 | 8192 | 16384 | 32768 |
| ---------------------- | ---------- | ---------- | ---------- | ---------- | ---------- |
| TinyLlama-1.1B | **8.5633** | 208.3586 | 863.7507 | 1600.5021 | 6981.9021 |
| **TinyLlama-1.1B-32k** | 8.6548 | **7.8339** | **7.4904** | **7.3674** | **7.1338** |
### Evaluation on HumanEval by [turboderp](https://huggingface.co/turboderp):
| Model | Pass@1 | Pass@10 |
| -------------------------------------- | --------------- | ----------- |
| TinyLlama-1.1B | **0.0841** | **0.1524** |
| TinyLlama-1.1B (NTK alpha=7.7) | 0.0598 | 0.1098 |
| TinyLlama-1.1B-32k-ckpt-554 | 0.0732 | 0.1402 |
| **TinyLlama-1.1B-32k** | 0.0829 | **0.1524** |
|
Superrrdamn/task-2-Qwen-Qwen2.5-3B-Instruct | Superrrdamn | "2025-01-25T05:27:09Z" | 272 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | "2025-01-25T03:03:56Z" | ---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
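A minimal sketch, assuming the adapter is loaded on top of the base model listed above:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "Qwen/Qwen2.5-3B-Instruct"
adapter_name = "Superrrdamn/task-2-Qwen-Qwen2.5-3B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(base_name)
base = AutoModelForCausalLM.from_pretrained(base_name, torch_dtype="auto", device_map="auto")

# Attach the PEFT adapter weights from this repository
model = PeftModel.from_pretrained(base, adapter_name)

messages = [{"role": "user", "content": "Hello!"}]  # placeholder prompt
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```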
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Izzy-And-Supafly-Viral-Video-Link/Melina.Goransson.Leaked.Video.On.Social.Media.X.Twitter | Izzy-And-Supafly-Viral-Video-Link | "2025-02-21T18:51:29Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-21T18:51:22Z" |
<a href="http://bit.ly/3ZBGcrZ"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="http://bit.ly/3ZBGcrZ" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="http://bit.ly/3ZBGcrZ" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
keyi10/wav2vec2-model-training | keyi10 | "2024-03-28T11:04:15Z" | 99 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:lj_speech",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-03-28T07:23:54Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- lj_speech
model-index:
- name: wav2vec2-model-training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-model-training
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the lj_speech dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1258
- Wer: 0.1446
## Model description
More information needed
## Intended uses & limitations
More information needed
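A minimal usage sketch with the `transformers` ASR pipeline; the audio path is a placeholder and the input is assumed to be 16 kHz mono speech:
```python
from transformers import pipeline

# Load the fine-tuned wav2vec2 CTC model for speech recognition
asr = pipeline("automatic-speech-recognition", model="keyi10/wav2vec2-model-training")

# "sample.wav" is a placeholder path to a 16 kHz mono recording
print(asr("sample.wav")["text"])
```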
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5099 | 1.52 | 500 | 1.1323 | 0.6740 |
| 0.3293 | 3.05 | 1000 | 0.1430 | 0.1851 |
| 0.1047 | 4.57 | 1500 | 0.1258 | 0.1446 |
### Framework versions
- Transformers 4.17.0
- Pytorch 2.1.2
- Datasets 2.15.0
- Tokenizers 0.15.2
|
agentlans/zhtw-en | agentlans | "2025-03-11T08:27:37Z" | 17 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"translation",
"en",
"zh",
"dataset:zetavg/coct-en-zh-tw-translations-twp-300k",
"base_model:Helsinki-NLP/opus-mt-zh-en",
"base_model:finetune:Helsinki-NLP/opus-mt-zh-en",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2025-03-07T23:11:39Z" | ---
library_name: transformers
language:
- en
- zh
license: cc-by-4.0
base_model: Helsinki-NLP/opus-mt-zh-en
tags:
- generated_from_trainer
model-index:
- name: zhtw-en
results: []
datasets:
- zetavg/coct-en-zh-tw-translations-twp-300k
pipeline_tag: translation
---
# zhtw-en
<details>
<summary>English</summary>
This model translates Traditional Chinese sentences into English, with a focus on understanding Taiwanese-style Traditional Chinese and producing more accurate English translations.
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-zh-en](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en) on the [zetavg/coct-en-zh-tw-translations-twp-300k](https://huggingface.co/datasets/zetavg/coct-en-zh-tw-translations-twp-300k) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4350
- Num Input Tokens Seen: 55653732
## Intended Uses & Limitations
### Intended Use Cases
- Translating single sentences from Chinese to English.
- Applications requiring understanding of the Chinese language as spoken in Taiwan.
### Limitations
- Designed for single-sentence translation, so it will not perform well on longer texts without pre-processing
- Sometimes hallucinates or omits information, especially with short or long inputs
- Further fine-tuning will address this
## Training and Evaluation Data
This model was trained and evaluated on the [Corpus of Contemporary Taiwanese Mandarin (COCT) translations](https://huggingface.co/datasets/zetavg/coct-en-zh-tw-translations-twp-300k) dataset.
- **Training Data:** 80% of the COCT dataset
- **Validation Data:** 20% of the COCT dataset
</details>
<details>
<summary>Chinese</summary>
該模型旨在將繁體中文翻譯成英文,重點是理解台灣風格的繁體中文並產生更準確的英文翻譯。
模型基於 [Helsinki-NLP/opus-mt-zh-en](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en) 並在 [zetavg/coct-en-zh-tw-translations-twp-300k](https://huggingface.co/datasets/zetavg/coct-en-zh-tw-translations-twp-300k) 資料集上進行微調。
在評估集上,模型取得了以下結果:
- **損失**:2.4350
- **處理的輸入標記數量**:55,653,732
## 預期用途與限制
### 預期用途
- 將單一中文句子翻譯為英文。
- 適用於需要理解台灣中文的應用程式。
### 限制
- 本模型專為單句翻譯設計,因此在處理較長文本時可能表現不佳,若未經預處理。
- 在某些情況下,模型可能會產生幻覺或遺漏信息,特別是在輸入過短或過長的情況下。
- 進一步的微調將有助於改善這些問題。
## 訓練與評估數據
該模型使用 [當代台灣普通話語料庫 (COCT)](https://huggingface.co/datasets/zetavg/coct-en-zh-tw-translations-twp-300k) 資料集進行訓練和評估。
- **訓練資料**:COCT 資料集的 80%
- **驗證資料**:COCT 資料集的 20%
</details>
## Example
```python
from transformers import pipeline
model_checkpoint = "agentlans/zhtw-en"
translator = pipeline("translation", model=model_checkpoint)
# 摘自中文維基百科的今日文章
# From Chinese Wikipedia's article of the day
translator("《阿奇大戰鐵血戰士》是2015年4至7月黑馬漫畫和阿奇漫畫在美國發行的四期限量連環漫畫圖書,由亞歷克斯·德坎皮創作,費爾南多·魯伊斯繪圖,屬跨公司跨界作品。")[0]['translation_text']
# 輸出
# Output
# Acer's Iron Blood Fighter is a four-year series of comic books published in the United States by Black Horse and Ah Chi comics from April to July of that year. The book was created by Alexander d'Campie and painted by Philnanto Ruiz. It is a cross-firm work.
# 與我自己的黃金標準翻譯比較:
# Compare with my own gold standard translation:
# "Archie vs. Predator" is a limited four-issue comic book series published by Black Horse and Archie Comics in the United States from April to July 2015. It was created by Alex de Campi and drawn by Fernando Ruiz. It's a crossover work.
```
## Training Procedure
### Training Hyperparameters
The following hyperparameters were used during training:
- **Learning Rate:** 5e-05
- **Train Batch Size:** 8
- **Eval Batch Size:** 8
- **Seed:** 42
- **Optimizer:** adamw\_torch with betas=(0.9,0.999) and epsilon=1e-08
- **LR Scheduler Type:** linear
- **Number of Epochs:** 3.0
### Training Results
<details>
<summary>Click here to see the training and validation losses</summary>
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:-----:|:---------------:|:-----------------:|
| 3.2254 | 0.0804 | 2500 | 2.9105 | 1493088 |
| 3.0946 | 0.1608 | 5000 | 2.8305 | 2990968 |
| 3.0473 | 0.2412 | 7500 | 2.7737 | 4477792 |
| 2.9633 | 0.3216 | 10000 | 2.7307 | 5967560 |
| 2.9355 | 0.4020 | 12500 | 2.6843 | 7463192 |
| 2.9076 | 0.4824 | 15000 | 2.6587 | 8950264 |
| 2.8714 | 0.5628 | 17500 | 2.6304 | 10443344 |
| 2.8716 | 0.6433 | 20000 | 2.6025 | 11951096 |
| 2.7989 | 0.7237 | 22500 | 2.5822 | 13432464 |
| 2.7941 | 0.8041 | 25000 | 2.5630 | 14919424 |
| 2.7692 | 0.8845 | 27500 | 2.5497 | 16415080 |
| 2.757 | 0.9649 | 30000 | 2.5388 | 17897832 |
| 2.7024 | 1.0453 | 32500 | 2.6006 | 19384812 |
| 2.7248 | 1.1257 | 35000 | 2.6042 | 20876844 |
| 2.6764 | 1.2061 | 37500 | 2.5923 | 22372340 |
| 2.6854 | 1.2865 | 40000 | 2.5793 | 23866100 |
| 2.683 | 1.3669 | 42500 | 2.5722 | 25348084 |
| 2.6871 | 1.4473 | 45000 | 2.5538 | 26854100 |
| 2.6551 | 1.5277 | 47500 | 2.5443 | 28332612 |
| 2.661 | 1.6081 | 50000 | 2.5278 | 29822156 |
| 2.6497 | 1.6885 | 52500 | 2.5266 | 31319476 |
| 2.6281 | 1.7689 | 55000 | 2.5116 | 32813220 |
| 2.6067 | 1.8494 | 57500 | 2.5047 | 34298052 |
| 2.6112 | 1.9298 | 60000 | 2.4935 | 35783604 |
| 2.5207 | 2.0102 | 62500 | 2.4946 | 37281092 |
| 2.4799 | 2.0906 | 65000 | 2.4916 | 38768588 |
| 2.4727 | 2.1710 | 67500 | 2.4866 | 40252972 |
| 2.4719 | 2.2514 | 70000 | 2.4760 | 41746300 |
| 2.4738 | 2.3318 | 72500 | 2.4713 | 43241188 |
| 2.4629 | 2.4122 | 75000 | 2.4630 | 44730244 |
| 2.4524 | 2.4926 | 77500 | 2.4575 | 46231060 |
| 2.435 | 2.5730 | 80000 | 2.4553 | 47718964 |
| 2.4621 | 2.6534 | 82500 | 2.4475 | 49209724 |
| 2.4492 | 2.7338 | 85000 | 2.4440 | 50712980 |
| 2.4536 | 2.8142 | 87500 | 2.4394 | 52204380 |
| 2.4148 | 2.8946 | 90000 | 2.4360 | 53695620 |
| 2.4243 | 2.9750 | 92500 | 2.4350 | 55190020 |
</details>
### Framework Versions
- Transformers 4.48.1
- Pytorch 2.3.0+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0 |
TheBloke/sheep-duck-llama-2-13B-GGUF | TheBloke | "2023-10-08T16:32:35Z" | 81 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:Riiid/sheep-duck-llama-2-13b",
"base_model:quantized:Riiid/sheep-duck-llama-2-13b",
"license:llama2",
"region:us"
] | null | "2023-10-08T16:14:31Z" | ---
base_model: Riiid/sheep-duck-llama-2-13b
inference: false
license: llama2
model_creator: Riiid
model_name: Sheep Duck Llama 2 13B
model_type: llama
prompt_template: '### System:
{system_message}
### User:
{prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Sheep Duck Llama 2 13B - GGUF
- Model creator: [Riiid](https://huggingface.co/Riiid)
- Original model: [Sheep Duck Llama 2 13B](https://huggingface.co/Riiid/sheep-duck-llama-2-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Riiid's Sheep Duck Llama 2 13B](https://huggingface.co/Riiid/sheep-duck-llama-2-13b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF)
* [Riiid's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Riiid/sheep-duck-llama-2-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Hashes
```
### System:
{system_message}
### User:
{prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [sheep-duck-llama-2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [sheep-duck-llama-2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [sheep-duck-llama-2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [sheep-duck-llama-2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [sheep-duck-llama-2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [sheep-duck-llama-2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [sheep-duck-llama-2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [sheep-duck-llama-2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [sheep-duck-llama-2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [sheep-duck-llama-2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [sheep-duck-llama-2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [sheep-duck-llama-2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-13B-GGUF/blob/main/sheep-duck-llama-2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/sheep-duck-llama-2-13B-GGUF and below it, a specific filename to download, such as: sheep-duck-llama-2-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/sheep-duck-llama-2-13B-GGUF sheep-duck-llama-2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/sheep-duck-llama-2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/sheep-duck-llama-2-13B-GGUF sheep-duck-llama-2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m sheep-duck-llama-2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\n{system_message}\n\n### User:\n{prompt}\n\n### Assistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/sheep-duck-llama-2-13B-GGUF", model_file="sheep-duck-llama-2-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Riiid's Sheep Duck Llama 2 13B
No original model card was available.
<!-- original-model-card end -->
|
Best000/fbb8e6cd-8b54-49b4-a720-f73b86915043 | Best000 | "2025-02-05T18:00:43Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"dbrx",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-dbrx",
"base_model:adapter:katuni4ka/tiny-random-dbrx",
"region:us"
] | null | "2025-02-05T17:58:30Z" | ---
library_name: peft
base_model: katuni4ka/tiny-random-dbrx
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fbb8e6cd-8b54-49b4-a720-f73b86915043
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# fbb8e6cd-8b54-49b4-a720-f73b86915043
This model is a fine-tuned version of [katuni4ka/tiny-random-dbrx](https://huggingface.co/katuni4ka/tiny-random-dbrx) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ThuyNT/CS505_COQE_viT5_total_Instruction4_AOSPL_v1 | ThuyNT | "2024-07-03T23:45:05Z" | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-07-03T22:45:20Z" | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_COQE_viT5_total_Instruction4_AOSPL_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_total_Instruction4_AOSPL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Bryan32/IllustriousPonyXL | Bryan32 | "2025-04-14T10:54:29Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-01-29T02:40:28Z" | |
nayohan/polyglot-ko-12.8b-Inst | nayohan | "2023-11-17T17:03:46Z" | 6,591 | 1 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"polyglot-ko",
"gpt-neox",
"KoQuality",
"ko",
"dataset:DILAB-HYU/KoQuality",
"base_model:EleutherAI/polyglot-ko-12.8b",
"base_model:finetune:EleutherAI/polyglot-ko-12.8b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-07T07:47:27Z" | ---
license: apache-2.0
datasets:
- DILAB-HYU/KoQuality
language:
- ko
pipeline_tag: text-generation
tags:
- polyglot-ko
- gpt-neox
- KoQuality
base_model: EleutherAI/polyglot-ko-12.8b
---
This model is an instruction-tuned polyglot-ko-12.8b model, fine-tuned using 10% of the [Kullm, OIG, KoAlpaca] instruction dataset.
len10_k100_mrand_n0.01.json -> 29step
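A minimal generation sketch with `transformers`; the prompt format below is an assumption (a KoAlpaca/KULLM-style instruction prompt), since the card does not specify one:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "nayohan/polyglot-ko-12.8b-Inst"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

# Assumed instruction-style prompt; adjust to the format used in your own fine-tuning data
prompt = "### 질문: 한국의 수도는 어디인가요?\n\n### 답변:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```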
## Training hyperparameters
- learning_rate: 5e-5
- seed: 42
- distributed_type: multi-GPU (A100 40G) + CPU offloading (512GB)
- num_devices: 1
- train_batch_size: 4
- gradient_accumulation_steps: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
## Framework versions
- Transformers 4.35.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- deepspeed 0.11.1
- accelerate 0.24.1 |
stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | "2023-10-26T11:07:22Z" | 12 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-64k-td-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-64k-td-cased",
"license:mit",
"region:us"
] | token-classification | "2023-10-24T10:02:41Z" | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-64k-td-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT 64k as backbone LM.
The HIPE-2020 dataset is comprised of newspapers from mid 19C to mid 20C. For information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|------------------|--------------|--------------|--------------|-----------------|
| `bs8-e10-lr3e-05` | [0.8389][1] | [0.8466][2] | [0.8299][3] | [0.8391][4] | [0.8427][5] | 0.8394 ± 0.0062 |
| `bs4-e10-lr3e-05` | [0.8279][6] | [0.8364][7] | [0.8404][8] | [0.8382][9] | [0.8371][10] | 0.836 ± 0.0048 |
| `bs8-e10-lr5e-05` | [0.8418][11] | [0.8337][12] | [0.831][13] | [0.8346][14] | [0.8352][15] | 0.8353 ± 0.004 |
| `bs4-e10-lr5e-05` | [0.831][16] | [**0.8239**][17] | [0.7784][18] | [0.8313][19] | [0.8191][20] | 0.8167 ± 0.022 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
brettbbb/mc_cot_64 | brettbbb | "2023-12-08T04:02:28Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:finetune:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | "2023-12-08T03:44:36Z" | ---
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- generated_from_trainer
model-index:
- name: mc_cot_64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mc_cot_64
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.13.1
- Tokenizers 0.14.1
|
abc88767/5c73 | abc88767 | "2024-05-15T09:10:44Z" | 132 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-15T09:09:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf | RichardErkhov | "2025-04-05T10:39:36Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-05T10:13:07Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
potato_wizard_v38 - GGUF
- Model creator: https://huggingface.co/ShadrackImai/
- Original model: https://huggingface.co/ShadrackImai/potato_wizard_v38/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [potato_wizard_v38.Q2_K.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q2_K.gguf) | Q2_K | 0.54GB |
| [potato_wizard_v38.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.IQ3_XS.gguf) | IQ3_XS | 0.58GB |
| [potato_wizard_v38.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [potato_wizard_v38.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [potato_wizard_v38.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.IQ3_M.gguf) | IQ3_M | 0.61GB |
| [potato_wizard_v38.Q3_K.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q3_K.gguf) | Q3_K | 0.64GB |
| [potato_wizard_v38.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q3_K_M.gguf) | Q3_K_M | 0.64GB |
| [potato_wizard_v38.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q3_K_L.gguf) | Q3_K_L | 0.68GB |
| [potato_wizard_v38.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [potato_wizard_v38.Q4_0.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q4_0.gguf) | Q4_0 | 0.72GB |
| [potato_wizard_v38.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.IQ4_NL.gguf) | IQ4_NL | 0.72GB |
| [potato_wizard_v38.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q4_K_S.gguf) | Q4_K_S | 0.72GB |
| [potato_wizard_v38.Q4_K.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q4_K.gguf) | Q4_K | 0.75GB |
| [potato_wizard_v38.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q4_K_M.gguf) | Q4_K_M | 0.75GB |
| [potato_wizard_v38.Q4_1.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q4_1.gguf) | Q4_1 | 0.77GB |
| [potato_wizard_v38.Q5_0.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q5_0.gguf) | Q5_0 | 0.83GB |
| [potato_wizard_v38.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q5_K_S.gguf) | Q5_K_S | 0.83GB |
| [potato_wizard_v38.Q5_K.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q5_K.gguf) | Q5_K | 0.85GB |
| [potato_wizard_v38.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q5_K_M.gguf) | Q5_K_M | 0.85GB |
| [potato_wizard_v38.Q5_1.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q5_1.gguf) | Q5_1 | 0.89GB |
| [potato_wizard_v38.Q6_K.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q6_K.gguf) | Q6_K | 0.95GB |
| [potato_wizard_v38.Q8_0.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q8_0.gguf) | Q8_0 | 1.23GB |
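A minimal Python sketch for fetching and running one of the quants listed above is shown below. It is illustrative only and not part of the original card: it assumes `huggingface_hub` and `llama-cpp-python` are installed, the chosen filename is just one entry from the table, and plain completion is used because the original card does not document a chat template.

```python
# Minimal sketch: download one quant from the table above and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf",
    filename="potato_wizard_v38.Q4_K_M.gguf",  # ~0.75GB per the table above
)

llm = Llama(model_path=model_path, n_ctx=2048)

# Plain completion call; wrap the text in a chat template if the base model expects one.
output = llm("Explain in one sentence what GGUF quantization is.", max_tokens=64)
print(output["choices"][0]["text"])
```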
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ivangrapher/353670ce-d193-4795-8439-8453e13bfadc | ivangrapher | "2025-01-27T15:02:47Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:other",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-27T14:58:07Z" | ---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 353670ce-d193-4795-8439-8453e13bfadc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3b5b0fb294f7f9cf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3b5b0fb294f7f9cf_train_data.json
type:
field_input: rational_answer
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 256
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: ivangrapher/353670ce-d193-4795-8439-8453e13bfadc
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/3b5b0fb294f7f9cf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 15
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6f87c971-88ec-4d3e-be58-6859d8964c73
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6f87c971-88ec-4d3e-be58-6859d8964c73
warmup_steps: 15
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 353670ce-d193-4795-8439-8453e13bfadc
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0011 | 1 | nan |
| 0.0 | 0.0057 | 5 | nan |
| 0.0 | 0.0114 | 10 | nan |
| 0.0 | 0.0171 | 15 | nan |
| 0.0 | 0.0228 | 20 | nan |
| 0.0 | 0.0286 | 25 | nan |
| 0.0 | 0.0343 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf | RichardErkhov | "2024-07-02T11:18:50Z" | 6 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T11:15:39Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
smol_llama-220M-open_instruct - GGUF
- Model creator: https://huggingface.co/BEE-spoke-data/
- Original model: https://huggingface.co/BEE-spoke-data/smol_llama-220M-open_instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [smol_llama-220M-open_instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q2_K.gguf) | Q2_K | 0.09GB |
| [smol_llama-220M-open_instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.IQ3_XS.gguf) | IQ3_XS | 0.1GB |
| [smol_llama-220M-open_instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.IQ3_S.gguf) | IQ3_S | 0.1GB |
| [smol_llama-220M-open_instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q3_K_S.gguf) | Q3_K_S | 0.1GB |
| [smol_llama-220M-open_instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.IQ3_M.gguf) | IQ3_M | 0.1GB |
| [smol_llama-220M-open_instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q3_K.gguf) | Q3_K | 0.11GB |
| [smol_llama-220M-open_instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q3_K_M.gguf) | Q3_K_M | 0.11GB |
| [smol_llama-220M-open_instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q3_K_L.gguf) | Q3_K_L | 0.11GB |
| [smol_llama-220M-open_instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.IQ4_XS.gguf) | IQ4_XS | 0.12GB |
| [smol_llama-220M-open_instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q4_0.gguf) | Q4_0 | 0.12GB |
| [smol_llama-220M-open_instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.IQ4_NL.gguf) | IQ4_NL | 0.12GB |
| [smol_llama-220M-open_instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q4_K_S.gguf) | Q4_K_S | 0.12GB |
| [smol_llama-220M-open_instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q4_K.gguf) | Q4_K | 0.13GB |
| [smol_llama-220M-open_instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q4_K_M.gguf) | Q4_K_M | 0.13GB |
| [smol_llama-220M-open_instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q4_1.gguf) | Q4_1 | 0.13GB |
| [smol_llama-220M-open_instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q5_0.gguf) | Q5_0 | 0.14GB |
| [smol_llama-220M-open_instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q5_K_S.gguf) | Q5_K_S | 0.14GB |
| [smol_llama-220M-open_instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q5_K.gguf) | Q5_K | 0.15GB |
| [smol_llama-220M-open_instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q5_K_M.gguf) | Q5_K_M | 0.15GB |
| [smol_llama-220M-open_instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q5_1.gguf) | Q5_1 | 0.16GB |
| [smol_llama-220M-open_instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q6_K.gguf) | Q6_K | 0.17GB |
| [smol_llama-220M-open_instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-open_instruct-gguf/blob/main/smol_llama-220M-open_instruct.Q8_0.gguf) | Q8_0 | 0.22GB |
Original model description:
---
license: apache-2.0
datasets:
- VMware/open-instruct
base_model: BEE-spoke-data/smol_llama-220M-GQA
inference:
parameters:
do_sample: true
renormalize_logits: true
temperature: 0.25
top_p: 0.95
top_k: 50
min_new_tokens: 2
max_new_tokens: 96
repetition_penalty: 1.04
no_repeat_ngram_size: 6
epsilon_cutoff: 0.0006
widget:
- text: "Below is an instruction that describes a task, paired with an input that\
\ provides further context. Write a response that appropriately completes the\
\ request. \n \n### Instruction: \n \nWrite an ode to Chipotle burritos.\
\ \n \n### Response: \n"
example_title: burritos
model-index:
- name: smol_llama-220M-open_instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 25.0
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-open_instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 29.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-open_instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-open_instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 44.06
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-open_instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.28
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-open_instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-open_instruct
name: Open LLM Leaderboard
---
# BEE-spoke-data/smol_llama-220M-open_instruct
> Please note that this is an experiment, and the model has limitations because it is smol.
The prompt format is Alpaca.
```
Below is an instruction that describes a task, paired with an input that
provides further context. Write a response that appropriately completes
the request.
### Instruction:
How can I increase my meme production/output? Currently, I only create them in ancient babylonian which is time consuming.
### Response:
```
This was **not** trained using a separate 'inputs' field (as `VMware/open-instruct` doesn't use one).
## Example
Output for the prompt above ^. The inference API is set to sample with a low temperature, so you should see (_at least slightly_) different generations each time.

Note that the inference API parameters used here are an initial educated guess, and may be updated over time:
```yml
inference:
parameters:
do_sample: true
renormalize_logits: true
temperature: 0.25
top_p: 0.95
top_k: 50
min_new_tokens: 2
max_new_tokens: 96
repetition_penalty: 1.04
no_repeat_ngram_size: 6
epsilon_cutoff: 0.0006
```
Feel free to experiment with the parameters using the model in Python and let us know if you have improved results with other params!
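As a starting point for that experimentation, here is a minimal sketch (not part of the original card) that loads the model with `transformers` and passes the same sampling parameters shown above to `model.generate`; the prompt reuses the burrito example from the widget.

```python
# Minimal sketch: reuse the inference parameters listed above with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BEE-spoke-data/smol_llama-220M-open_instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n\nWrite an ode to Chipotle burritos.\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    renormalize_logits=True,
    temperature=0.25,
    top_p=0.95,
    top_k=50,
    min_new_tokens=2,
    max_new_tokens=96,
    repetition_penalty=1.04,
    no_repeat_ngram_size=6,
    epsilon_cutoff=0.0006,
)
# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```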
## Data
This was trained on `VMware/open-instruct` so do whatever you want, provided it falls under the base apache-2.0 license :)
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__smol_llama-220M-open_instruct)
| Metric |Value|
|---------------------------------|----:|
|Avg. |29.19|
|AI2 Reasoning Challenge (25-Shot)|25.00|
|HellaSwag (10-Shot) |29.71|
|MMLU (5-Shot) |26.11|
|TruthfulQA (0-shot) |44.06|
|Winogrande (5-shot) |50.28|
|GSM8k (5-shot) | 0.00|
|
LoneStriker/Metis-0.4-6.0bpw-h6-exl2 | LoneStriker | "2023-12-19T11:46:05Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"base_model:Mihaiii/Metis-0.3",
"base_model:finetune:Mihaiii/Metis-0.3",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-19T10:44:56Z" | ---
base_model: Mihaiii/Metis-0.3
inference: false
license: apache-2.0
license_name: apache-2.0
metrics:
- accuracy
---
This is a merge between Metis-0.3 and Metis-0.1, with Metis-0.1 as the base.
It was done using [mergekit](https://github.com/cg123/mergekit).
It works well with long system prompts.
It isn't a general-purpose model, in the sense that it shouldn't be used for storytelling, for example, but only for reasoning and text comprehension.
This model is trained on a private dataset. The high GSM8K score is **NOT** because of the MetaMath dataset.
# Prompt Format:
```
<|system|>
{system_message} </s>
<|user|>
{prompt} </s>
<|assistant|>
```
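A small helper can assemble this format; the sketch below is illustrative and not part of the original card. Model loading is left out on purpose, since this repository is an EXL2 quant intended for ExLlama-based loaders rather than plain `transformers`.

```python
# Minimal sketch: build a single-turn prompt in the format documented above.
def build_metis_prompt(system_message: str, user_prompt: str) -> str:
    return (
        "<|system|>\n"
        f"{system_message} </s>\n"
        "<|user|>\n"
        f"{user_prompt} </s>\n"
        "<|assistant|>\n"
    )

print(build_metis_prompt(
    "You answer strictly based on the provided text.",
    "Summarize the main argument of the passage in one sentence.",
))
```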
Merge config:
```yaml
slices:
- sources:
- model: Mihaiii/Metis-0.3
layer_range: [0, 32]
- model: Mihaiii/Metis-0.1
layer_range: [0, 32]
merge_method: slerp
base_model: Mihaiii/Metis-0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
``` |
mrferr3t/016a721c-e6bd-4b1e-9c76-70e1a41cb774 | mrferr3t | "2025-02-02T05:43:10Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Solar-10b-64k",
"base_model:adapter:NousResearch/Yarn-Solar-10b-64k",
"license:apache-2.0",
"region:us"
] | null | "2025-02-02T05:32:04Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Solar-10b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 016a721c-e6bd-4b1e-9c76-70e1a41cb774
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Solar-10b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f4e310178b93c9eb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f4e310178b93c9eb_train_data.json
type:
field_instruction: title
field_output: context
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 50
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/016a721c-e6bd-4b1e-9c76-70e1a41cb774
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0005
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 99
micro_batch_size: 2
mlflow_experiment_name: /tmp/f4e310178b93c9eb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 300
saves_per_epoch: 0
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 26771e6a-63b8-4475-9ff6-b6158ca6f757
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 26771e6a-63b8-4475-9ff6-b6158ca6f757
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 016a721c-e6bd-4b1e-9c76-70e1a41cb774
This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-64k](https://huggingface.co/NousResearch/Yarn-Solar-10b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3371
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 99
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 8.5945 | 0.0001 | 1 | 2.1116 |
| 5.6837 | 0.0062 | 50 | 1.3371 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
1231czx/7b_mistral_2e6_sft3epoch | 1231czx | "2024-07-08T05:57:14Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-08T05:54:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
VFiona/opus-mt-it-en-finetuned_20000-it-to-en | VFiona | "2023-07-20T13:55:52Z" | 104 | 1 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-07-20T12:41:57Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-it-en-finetuned_20000-it-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-it-en-finetuned_20000-it-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-it-en](https://huggingface.co/Helsinki-NLP/opus-mt-it-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3483
- Bleu: 75.7583
- Gen Len: 21.996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.3971 | 1.0 | 1125 | 0.3483 | 75.7583 | 21.996 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.11.0
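
## Usage

A minimal usage sketch, not part of the original card: since this is a fine-tuned MarianMT checkpoint, it should work with the standard `transformers` translation pipeline.

```python
# Minimal sketch: translate Italian to English with the fine-tuned checkpoint.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="VFiona/opus-mt-it-en-finetuned_20000-it-to-en",
)

result = translator("Questo modello traduce dall'italiano all'inglese.")
print(result[0]["translation_text"])
```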
|
NohTow/ModernBERT-base-DPR-fullneg-gte-0.0002 | NohTow | "2024-12-22T16:33:24Z" | 14 | 2 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1770649",
"loss:CachedMultipleNegativesRankingLoss",
"dataset:cfli/bge-full-data",
"arxiv:1908.10084",
"arxiv:2101.06983",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-12-22T16:33:01Z" | ---
datasets:
- cfli/bge-full-data
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1770649
- loss:CachedMultipleNegativesRankingLoss
widget:
- source_sentence: what is the pulse in your wrist called
sentences:
- 'Pulse cm up the forearm is suggestive of arteriosclerosis. In coarctation of
aorta, femoral pulse may be significantly delayed as compared to radial pulse
(unless there is coexisting aortic regurgitation). The delay can also be observed
in supravalvar aortic stenosis. Several pulse patterns can be of clinically significance.
These include: Chinese medicine has focused on the pulse in the upper limbs for
several centuries. The concept of pulse diagnosis is essentially based on palpation
and observations of the radial and ulnar volar pulses at the readily accessible
wrist. Although the pulse can be felt in multiple places in the head, people'
- Pulse diagnosis into three positions on each wrist. The first pulse closest to
the wrist is the "cun" (inch, 寸) position, the second "guan" (gate, 關), and the
third pulse position furthest away from the wrist is the "chi" (foot, 尺). There
are several systems of diagnostic interpretation of pulse findings utilised in
the Chinese medicine system. Some systems (Cun Kou) utilise overall pulse qualities,
looking at changes in the assessed parameters of the pulse to derive one of the
traditional 28 pulse types. Other approaches focus on individual pulse positions,
looking at changes in the pulse quality and strength within the
- 'Pre-hospital trauma assessment inside of the wrist toward the thumb. For unresponsive
adult patients, checking pulse is performed by palpating the carotid artery in
the neck. For infants and small children, the pulse is usually assessed in the
brachial artery in the upper arm. After confirming that the pulse is present,
the final step in the initial assessment for a trauma patient is to check for
any gross bleeding and to control it. Should a pulse not be detected, or in the
case of a child or infant is present but at a rate less than 60, cardiovascular
resuscitation will be commenced. Steps:'
- Pulse Pulse In medicine, a pulse represents the tactile arterial palpation of
the heartbeat by trained fingertips. The pulse may be palpated in any place that
allows an artery to be compressed near the surface of the body, such as at the
neck (carotid artery), wrist (radial artery), at the groin (femoral artery), behind
the knee (popliteal artery), near the ankle joint (posterior tibial artery), and
on foot (dorsalis pedis artery). Pulse (or the count of arterial pulse per minute)
is equivalent to measuring the heart rate. The heart rate can also be measured
by listening to the heart beat by
- Pulse diagnosis dosha. The middle finger and ring finger are placed next to the
index finger and represents consequently the Pitta and Kapha doshas of the patient.
Pulse can be measured in the superficial, middle, and deep levels thus obtaining
more information regarding energy imbalance of the patient. The main sites for
pulse assessment are the radial arteries in the left and right wrists, where it
overlays the styloid process of the radius, between the wrist crease and extending
proximal, approximately 5 cm in length (or 1.9 cun, where the forearm is 12 cun).
In traditional Chinese medicine, the pulse is divided
- 'Pulse auscultation, traditionally using a stethoscope and counting it for a minute.
The radial pulse is commonly measured using three fingers. This has a reason:
the finger closest to the heart is used to occlude the pulse pressure, the middle
finger is used get a crude estimate of the blood pressure, and the finger most
distal to the heart (usually the ring finger) is used to nullify the effect of
the ulnar pulse as the two arteries are connected via the palmar arches (superficial
and deep). The study of the pulse is known as sphygmology. Claudius Galen was
perhaps the first'
- source_sentence: Diet and Mass Conservation--We weigh as much as we eat?
sentences:
- '[This thread](_URL_0_) contains a good comment string based on /u/Redwing999
experience and some written sources on insect obesity.'
- We have two chemicals. One that tells us that we're full and the other that tells
us something gives us pleasure. Through evolution, they made sure that the balance
wouldn't tip. Now, the latter can override the former. That means you eat cake
because it gives you pleasure even though you're full as hell. The balance has
tipped and temptation gets in our way. This is one of the reasons for obesity!
- This question actually has nothing to do with the law of conservation of mass
or energy. You don't take up more mass by exercising; in fact, you technically
**lose** mass because you are sweating water and other substances out, as well
as converting your food into heat and having this heat escape your body. It's
just that when your muscle fibers are damaged through exercise, they "over-heal"
(to put it very unsophisticated-sounding). The food you eat contributes to feeding
these growing muscles, which adds more mass to your body. So you *lose* mass through
exercising, but more than make up for it with a proper diet.
- A professor of nutrition went on a diet for 10 weeks, consisting largely of twinkies,
oreos, and doritos. While still maintaining multivitamins and a protein shake
daily with occasional greens as well to not go completely off the deep end. After
the 10 weeks of controlling a steady stream of 1,800 calories a day he lost 27
pounds, lowered his bad cholesterol by 20% and upping his good cholesterol also
by 20%. Most weight loss is from a steady intake in a caloric deficit (IE don't
eat 1,700 of your daily 1,800 in one meal). If you do this make sure to also grab
multivitamins if you don't already have them, and ensure you're getting some protein.
Obviously these are also just short term results, and it's not recommended you
over indulge in junk food over a balanced diet and daily exercise. Article link
here (sorry for ghetto link I'm on my phone) _URL_0_
- This is a great question. I hope we get some real answers. I don't chew my food
much, I'm pretty skinny and eat a ton..I always wondered if chewing less makes
less nutrients available for absorption
- There is a tremendous amount of misinformation surrounding calories and weight.
[This blog entry](_URL_0_) does a good job of presenting why people so often get
confused with regards to thermodynamics and food. There's a lot to learn, but
it's a good start.
- source_sentence: Are Jett Pangan and Jon Fratelli both from Scotland?
sentences:
- Gary Lightbody Gary Lightbody (born 15 June 1976) is a Northern Irish singer,
songwriter, guitarist and multi-instrumentalist, best known as the lead singer
and rhythm guitarist of the Northern Irish-Scottish rock band Snow Patrol.
- Ray Wilson (musician) Raymond Wilson (born 8 September 1968) is a Scottish musician,
best known as vocalist in the post-grunge band Stiltskin, and in Genesis from
1996 to 1998.
- Peter Frampton Peter Kenneth Frampton (born 22 April 1950) is an English rock
musician, singer, songwriter, producer, and guitarist. He was previously associated
with the bands Humble Pie and The Herd. At the end of his 'group' career was Frampton's
international breakthrough album his live release, "Frampton Comes Alive!" The
album sold in the United States more than 8 million copies and spawned several
single hits. Since then he has released several major albums. He has also worked
with David Bowie and both Matt Cameron and Mike McCready from Pearl Jam, among
others.
- Rob Wainwright (rugby union) Robert Iain Wainwright (born 22 March 1965 in Perth,
Scotland) is a former rugby union footballer who was capped 37 times for Scotland
(Captain 16 times) and once for the British and Irish Lions. He played flanker.
- Bert Jansch Herbert "Bert" Jansch (3 November 1943 – 5 October 2011) was a Scottish
folk musician and founding member of the band Pentangle. He was born in Glasgow
and came to prominence in London in the 1960s, as an acoustic guitarist, as well
as a singer-songwriter. He recorded at least 25 albums and toured extensively
from the 1960s to the 21st century.
- Jett Pangan Jett Pangan (born Reginald Pangan on June 21, 1968) is a Filipino
singer and guitarist best known for fronting the Filipino rock bands The Dawn,
and the now defunct Jett Pangan Group. He is also an actor, appearing in several
TV and films, most notably his role in "Tulad ng Dati". He is the half-brother
of John Lapus.
- source_sentence: How can I control my mind from thinking too much?
sentences:
- Why is it that we always think about anything too much which is not even worth
thinking?
- When I'm around people I love my mind goes blank. As I get closer to someone it
gets worse and worse. How can I change my way of thinking?
- Why am I thinking too much?
- Why am I thinking too much about everything?
- If I keep choosing not to fully think about a concept or grab onto it when it
appears in my mind while I am reading or doing something else, am I damaging my
brain's ability to understand and act on those things in the future?
- How do I keep my mind from thinking too much over a thing?
- source_sentence: Who won 23 World Rally Championships, two in particular with the
Lancia Delta Group A rally car?
sentences:
- Lancia Delta Group A The Lancia Delta Group A is a Group A rally car built for
the Martini Lancia by Lancia to compete in the World Rally Championship. It is
based upon the Lancia Delta road car and replaced the Lancia Delta S4. The car
was introduced for the 1987 World Rally Championship season and dominated the
World Rally Championship, scoring 46 WRC victories overall and winning the constructors'
championship a record six times in a row from 1987 to 1992, in addition to drivers'
championship titles for Juha Kankkunen (1987 and 1991) and Miki Biasion (1988
and 1989), making Lancia the most successful marque in the history of the WRC
and the Delta the most successful car.
- Luis Moya Luis Rodríguez Moya, better known as Luis Moya (born 23 September 1960
in La Coruña, Spain) is a now-retired rally co-driver, synonymous with driver
Carlos Sainz. He is the third most successful co-driver in the history of the
World Rally Championship (WRC), after Daniel Elena and Timo Rautiainen
- 2016 World Rally Championship-3 The 2016 World Rally Championship-3 was the fourth
season of the World Rally Championship-3, an auto racing championship recognized
by the Fédération Internationale de l'Automobile, ran in support of the World
Rally Championship. It was created when the Group R class of rally car was introduced
in 2013. The Championship was composed of fourteen rallies, and drivers and teams
had to nominate a maximum of six events. The best five results counted towards
the championship.
- 2015 Rally Catalunya The 2015 Rally Catalunya (formally the 51º Rally RACC Catalunya
– Costa Daurada) was the twelfth round of the 2015 World Rally Championship. The
race was held over four days between 22 October and 25 October 2015, and operated
out of Salou, Catalonia, Spain. Volkswagen's Andreas Mikkelsen won the race, his
first win in the World Rally Championship.
- 'Lancia Rally 037 The Lancia Rally ("Tipo 151", also known as the Lancia Rally
037, Lancia 037 or Lancia-Abarth #037 from its Abarth project code "037") was
a mid-engine sports car and rally car built by Lancia in the early 1980s to compete
in the FIA Group B World Rally Championship. Driven by Markku Alén, Attilio Bettega,
and Walter Röhrl, the car won Lancia the manufacturers'' world championship in
the 1983 season. It was the last rear-wheel drive car to win the WRC.'
- John Lund (racing driver) John Lund (born 12 January 1954) is a BriSCA Formula
1 Stock Cars racing driver from Rimington, Lancashire who races under number 53.
Lund is one of the most successful stock car drivers of all time and holds the
current record for the most World Championship wins.
model-index:
- name: SentenceTransformer
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: cosine_accuracy@1
value: 0.22
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.52
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.64
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.22
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.20666666666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14400000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.084
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.08833333333333332
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.26666666666666666
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.30833333333333335
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.35666666666666663
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.2839842522559327
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.37471428571428567
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2232144898031751
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: cosine_accuracy@1
value: 0.7
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.74
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.86
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.48
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.43200000000000005
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.3760000000000001
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.07263002775640012
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.11337585016033845
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.15857516982468162
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.23454122344078535
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4732884231947513
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.738888888888889
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.334802367685341
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: cosine_accuracy@1
value: 0.88
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.96
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.88
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33333333333333326
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20799999999999996
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10799999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8266666666666667
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9233333333333333
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9533333333333333
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9733333333333333
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.920250305861268
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9266666666666665
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8908062417949636
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: cosine_accuracy@1
value: 0.46
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.62
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.68
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.74
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.46
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2866666666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.22399999999999995
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.13399999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.24452380952380953
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4037936507936508
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.4890396825396825
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5964206349206349
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.49008883369308526
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5513333333333333
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4201188803513742
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: cosine_accuracy@1
value: 0.82
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.94
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.94
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.96
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.82
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.38666666666666655
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.24799999999999997
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.132
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.41
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.58
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.62
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.66
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6699619900438456
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8795238095238095
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5983592359151276
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: cosine_accuracy@1
value: 0.34
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.72
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.82
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.34
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14400000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08199999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.34
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.72
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.82
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5747097116234108
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4967380952380951
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5049567742199321
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: cosine_accuracy@1
value: 0.36
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.56
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.62
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.36
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2933333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.296
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.22
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.015576651798182985
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.03488791186499473
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.06408574388859087
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.07971201227506045
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.25470834876894616
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4443888888888889
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.09234660597563751
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: cosine_accuracy@1
value: 0.46
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.66
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.78
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.46
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.22
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14400000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08399999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.45
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.61
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.66
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.75
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6060972125930784
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.569079365079365
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5645161933196003
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: cosine_accuracy@1
value: 0.94
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.98
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.98
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.94
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.40666666666666657
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.25199999999999995
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.13599999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8173333333333332
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9453333333333334
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.956
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9933333333333334
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9593808852823181
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9625
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9422896825396825
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: cosine_accuracy@1
value: 0.48
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.66
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.74
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.86
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.48
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33333333333333326
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.276
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.20199999999999996
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.10166666666666668
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.20666666666666664
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.2846666666666667
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.41566666666666663
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3972031938693105
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5927698412698412
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.304253910983743
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: cosine_accuracy@1
value: 0.26
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.64
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.26
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.21333333333333335
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08999999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.26
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.64
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5855962294470597
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.48385714285714276
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.48932444805879344
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: cosine_accuracy@1
value: 0.34
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.48
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.54
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.34
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.128
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.305
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.47
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.54
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.45719389021878065
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4177460317460317
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.41560718364765603
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: cosine_accuracy@1
value: 0.4897959183673469
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8367346938775511
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8979591836734694
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9795918367346939
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4897959183673469
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.5034013605442177
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.4653061224489797
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.36122448979591837
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.03552902483256089
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.10751588484963115
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.16516486949441941
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.24301991055992778
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4179864214131331
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6742306446388079
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.30799309847167516
name: Cosine Map@100
- task:
type: nano-beir
name: Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: cosine_accuracy@1
value: 0.519215070643642
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7028257456828885
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7659968602825747
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8276609105180532
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.519215070643642
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.31103087388801676
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.2401004709576139
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.1599403453689168
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.30517380876238104
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4539671767437396
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5168614460831313
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5863610600920313
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5454192075588399
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6240336149111658
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4683530086743616
name: Cosine Map@100
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained on the [bge-full-data](https://huggingface.co/datasets/cfli/bge-full-data) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [bge-full-data](https://huggingface.co/datasets/cfli/bge-full-data)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("NohTow/ModernBERT-base-DPR-fullneg-gte-0.0002")
# Run inference
sentences = [
'Who won 23 World Rally Championships, two in particular with the Lancia Delta Group A rally car?',
"Lancia Delta Group A The Lancia Delta Group A is a Group A rally car built for the Martini Lancia by Lancia to compete in the World Rally Championship. It is based upon the Lancia Delta road car and replaced the Lancia Delta S4. The car was introduced for the 1987 World Rally Championship season and dominated the World Rally Championship, scoring 46 WRC victories overall and winning the constructors' championship a record six times in a row from 1987 to 1992, in addition to drivers' championship titles for Juha Kankkunen (1987 and 1991) and Miki Biasion (1988 and 1989), making Lancia the most successful marque in the history of the WRC and the Delta the most successful car.",
'Luis Moya Luis Rodríguez Moya, better known as Luis Moya (born 23 September 1960 in La Coruña, Spain) is a now-retired rally co-driver, synonymous with driver Carlos Sainz. He is the third most successful co-driver in the history of the World Rally Championship (WRC), after Daniel Elena and Timo Rautiainen',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:--------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:-------------|:-----------|:-------------------|:------------|:------------|:------------|:---------------|
| cosine_accuracy@1 | 0.22 | 0.7 | 0.88 | 0.46 | 0.82 | 0.34 | 0.36 | 0.46 | 0.94 | 0.48 | 0.26 | 0.34 | 0.4898 |
| cosine_accuracy@3 | 0.52 | 0.74 | 0.96 | 0.62 | 0.94 | 0.6 | 0.5 | 0.66 | 0.98 | 0.66 | 0.64 | 0.48 | 0.8367 |
| cosine_accuracy@5 | 0.6 | 0.8 | 1.0 | 0.68 | 0.94 | 0.72 | 0.56 | 0.7 | 0.98 | 0.74 | 0.8 | 0.54 | 0.898 |
| cosine_accuracy@10 | 0.64 | 0.86 | 1.0 | 0.74 | 0.96 | 0.82 | 0.62 | 0.78 | 1.0 | 0.86 | 0.9 | 0.6 | 0.9796 |
| cosine_precision@1 | 0.22 | 0.7 | 0.88 | 0.46 | 0.82 | 0.34 | 0.36 | 0.46 | 0.94 | 0.48 | 0.26 | 0.34 | 0.4898 |
| cosine_precision@3 | 0.2067 | 0.48 | 0.3333 | 0.2867 | 0.3867 | 0.2 | 0.2933 | 0.22 | 0.4067 | 0.3333 | 0.2133 | 0.18 | 0.5034 |
| cosine_precision@5 | 0.144 | 0.432 | 0.208 | 0.224 | 0.248 | 0.144 | 0.296 | 0.144 | 0.252 | 0.276 | 0.16 | 0.128 | 0.4653 |
| cosine_precision@10 | 0.084 | 0.376 | 0.108 | 0.134 | 0.132 | 0.082 | 0.22 | 0.084 | 0.136 | 0.202 | 0.09 | 0.07 | 0.3612 |
| cosine_recall@1 | 0.0883 | 0.0726 | 0.8267 | 0.2445 | 0.41 | 0.34 | 0.0156 | 0.45 | 0.8173 | 0.1017 | 0.26 | 0.305 | 0.0355 |
| cosine_recall@3 | 0.2667 | 0.1134 | 0.9233 | 0.4038 | 0.58 | 0.6 | 0.0349 | 0.61 | 0.9453 | 0.2067 | 0.64 | 0.47 | 0.1075 |
| cosine_recall@5 | 0.3083 | 0.1586 | 0.9533 | 0.489 | 0.62 | 0.72 | 0.0641 | 0.66 | 0.956 | 0.2847 | 0.8 | 0.54 | 0.1652 |
| cosine_recall@10 | 0.3567 | 0.2345 | 0.9733 | 0.5964 | 0.66 | 0.82 | 0.0797 | 0.75 | 0.9933 | 0.4157 | 0.9 | 0.6 | 0.243 |
| **cosine_ndcg@10** | **0.284** | **0.4733** | **0.9203** | **0.4901** | **0.67** | **0.5747** | **0.2547** | **0.6061** | **0.9594** | **0.3972** | **0.5856** | **0.4572** | **0.418** |
| cosine_mrr@10 | 0.3747 | 0.7389 | 0.9267 | 0.5513 | 0.8795 | 0.4967 | 0.4444 | 0.5691 | 0.9625 | 0.5928 | 0.4839 | 0.4177 | 0.6742 |
| cosine_map@100 | 0.2232 | 0.3348 | 0.8908 | 0.4201 | 0.5984 | 0.505 | 0.0923 | 0.5645 | 0.9423 | 0.3043 | 0.4893 | 0.4156 | 0.308 |
#### Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>NanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.NanoBEIREvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.5192 |
| cosine_accuracy@3 | 0.7028 |
| cosine_accuracy@5 | 0.766 |
| cosine_accuracy@10 | 0.8277 |
| cosine_precision@1 | 0.5192 |
| cosine_precision@3 | 0.311 |
| cosine_precision@5 | 0.2401 |
| cosine_precision@10 | 0.1599 |
| cosine_recall@1 | 0.3052 |
| cosine_recall@3 | 0.454 |
| cosine_recall@5 | 0.5169 |
| cosine_recall@10 | 0.5864 |
| **cosine_ndcg@10** | **0.5454** |
| cosine_mrr@10 | 0.624 |
| cosine_map@100 | 0.4684 |
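The aggregate scores above were produced with the evaluators linked in this section. A minimal sketch of re-running the NanoBEIR evaluation against the published checkpoint is shown below; the default evaluator settings and the exact result key are assumptions based on the sentence-transformers 3.x API and are not recorded in this card.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

# Load the published checkpoint (same id as in the usage example above).
model = SentenceTransformer("NohTow/ModernBERT-base-DPR-fullneg-gte-0.0002")

# With no dataset_names argument, the evaluator runs all 13 NanoBEIR subsets
# listed in the Metrics section, downloading them from the Hugging Face Hub.
evaluator = NanoBEIREvaluator()
results = evaluator(model)

# The mean nDCG@10 key mirrors the column name used in the training logs below.
print(results["NanoBEIR_mean_cosine_ndcg@10"])
```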
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### bge-full-data
* Dataset: [bge-full-data](https://huggingface.co/datasets/cfli/bge-full-data) at [78f5c99](https://huggingface.co/datasets/cfli/bge-full-data/tree/78f5c99b534a52824ab26bd24edda592eaed4c7a)
* Size: 1,770,649 training samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_0</code>, <code>negative_1</code>, <code>negative_2</code>, <code>negative_3</code>, and <code>negative_4</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_0 | negative_1 | negative_2 | negative_3 | negative_4 |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string | string | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 20.15 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 173.18 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 170.06 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 167.88 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 167.95 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 166.32 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 167.63 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive | negative_0 | negative_1 | negative_2 | negative_3 | negative_4 |
|:-------|:---------|:-----------|:-----------|:-----------|:-----------|:-----------|
| <code>What happens if you eat raw chicken?</code> | <code>What are the dangers of eating raw chicken?</code> | <code>Does all raw chicken have salmonella?</code> | <code>How safe is to eat chicken during pregnancy?</code> | <code>What meats are safe to eat raw?</code> | <code>What are some natural obligations of chicken?</code> | <code>Is it safe to eat raw egg?</code> |
| <code>how long does it take for a wren egg to hatch</code> | <code>How often does a mother Wren sit on her nest? I don't know for sure about how long Wrens usually spend on the nest at one sitting.. (Sorry couldn't resist the joke) However, the eggs usually hatch in 13-18 days, so if there were no hatchlings when that time elapsed, then you'd know for sure that she hadn't been behaving normally.</code> | <code>- When you are trying to hatch Tennessee red quail eggs, it will take approximately 23 days. You should perform lock down on the egg at 20 days. This is a period of time whe … n there should be no disturbances because hatching is likely to begin.urkey eggs usually take 21 to 28 days to hatch depending on what they are incubated in like an incubator or by a hen.</code> | <code>How long does it take an egg to hatch? For an average Eagle it would have a time for about 32-36 days, but the average time for an Eagle egg to hatch is about 35 days. 28 people found this useful.</code> | <code>- When you are trying to hatch Tennessee red quail eggs, it will take approximately 23 days. You should perform lock down on the egg at 20 days. This is a period of time whe … n there should be no disturbances because hatching is likely to begin.urkey eggs usually take 21 to 28 days to hatch depending on what they are incubated in like an incubator or by a hen. It also depends on how fertile it is and how it is cared … for.</code> | <code>- Actually this may vary depending on the kind of bird finch, the eggs hatch in between 12 - 16 days or 3 weeks.The nestlings fledge in 18 - 19 days.ctually this may vary depending on the kind of bird finch, the eggs hatch in between 12 - 16 days or 3 weeks.</code> | <code>- Welcome, and thanks for visiting the virtual home of the Whitestown Fire Department. Whether you’re stopping by to obtain information on our department, place a comment, track our progress and events, or just looking at the great pictures of our top notch personnel in action, we hope that you find what you’re after. Please feel free to provide feedback or contact us for any questions you may have.</code> |
| <code>can you have schizophrenia and bipolar</code> | <code>Can you have both bipolar disorder and schizophrenia? Health Mental Health Can you have both bipolar disorder and schizophrenia? I'm 19 and was diagnosed with Bipolar Disorder almost 2 years ago. I also have some symptoms of schizophrenia such as auditory hallucinations and occasional visual ones as well and occasional paranoia. Ok the paranoia is pretty frequent. So yea, Can you have both of them? I know some of the symptoms can be... show more Follow 6 answers Answers Relevance Rating Newest Oldest Best Answer: yes you can, but some people with bipolar disorder have hallucinations and delusions from the bipolar disorder. only a psychiatrist could diagnose you i guess. Source (s):er nurse Zach · 9 years ago0 0 Comment Asker's rating Yes, one can have both bipolar disorder and schizophrenia, as the cause is one and the same - a spirit (ghost). Not only are the mood swings imparted by the associated spirit, but the alleged hallucinations are as well. The voices that those diagnosed as h...</code> | <code>Dual Diagnosis: Understanding Sex Addiction With Bipolar Disorder Dual Diagnosis: Understanding Sex Addiction With Bipolar Disorder February 5, 2015 Dual Diagnosis Bipolar disorder manifests itself in one college student’s “need” to sexually expose himself on campus. Marty was diagnosed with bipolar 1 disorder in the spring of his junior year in college. The symptoms had emerged during adolescence, but it wasn’t until a particularly startling manic episode that Marty’s doctor knew his depression was more than unipolar (i.e., clinical depression by itself). The gifted art student had painted his naked body in elaborate geometric patterns and shown up at the fountain in front of his university’s grand administrative building during the middle of a sunny afternoon. He proceeded to dramatically quote Michel Foucault’s Madness and Civilization, even as he was carried away by campus security. The combination of SSRIs and mood stabilizers prescribed to Marty for the treatment of bipolar disor...</code> | <code>Understanding Schizoaffective Disorder Medication Understanding Schizoaffective Disorder Medication Because schizoaffective disorder has symptoms of both psychosis and a mood disorder, ✱ doctors often prescribe different medicines to treat different symptoms of the condition. For example, they may prescribe: An antipsychotic, which helps symptoms like delusions and hallucinations A mood-stabilizing medicine, which can help level out “highs” and “lows”An antidepressant, which can help feelings of sadness, hopelessness, and difficulty with sleep and concentration One medicine for schizoaffective disorder's symptoms INVEGA SUSTENNA ® treats the symptoms of schizoaffective disorder (psychosis and mood), so it may be possible for you to manage symptoms with one medicine if your doctor feels it’s right for you. And that means one less pill to think about every day. Approved for the treatment of schizophrenia and schizoaffective disorder.✱ Please discuss your symptoms with your healthcare pro...</code> | <code>Paranoia and schizophrenia: What you need to know Newsletter MNT - Hourly Medical News Since 2003Search Log in Newsletter MNT - Hourly Medical News Since 2003Search Login Paranoia and schizophrenia: What you need to know Last updated Thu 25 May 2017By Yvette Brazier Reviewed by Timothy J. 
Legg, Ph D, CRNPOverview Symptoms Causes Diagnosis Treatment Complications A person who has a condition on the schizophrenia spectrum may experience delusions and what is commonly known as paranoia. These delusions may give rise to fears that others are plotting against the individual. Everyone can have a paranoid thought from time to time. On a rough day, we may find ourselves saying "Oh boy, the whole world is out to get me!" But we recognize that this is not the case. People with paranoia often have an extensive network of paranoid thoughts and ideas. This can result in a disproportionate amount of time spent thinking up ways for the individual to protect themselves from their perceived persecutors...</code> | <code>Same Genes Suspected in Both Depression and Bipolar Illness Same Genes Suspected in Both Depression and Bipolar Illness Increased Risk May Stem From Variation in Gene On/Off Switch January 28, 2010 • Science Update Protein produced by PBRM1 gene Researchers, for the first time, have pinpointed a genetic hotspot that confers risk for both bipolar disorder and depression. People with either of these mood disorders were significantly more likely to have risk versions of genes at this site than healthy controls. One of the genes, which codes for part of a cell's machinery that tells genes when to turn on and off, was also found to be over-expressed in the executive hub of bipolar patients' brains, making it a prime suspect. The results add to mounting evidence that major mental disorders overlap at the molecular level. "People who carry the risk versions may differ in some dimension of brain development that may increase risk for mood disorders later in life," explained Francis Mc Mahon, M...</code> | <code>Schizophrenia Definition and Characteristics Schizophrenia Schizophrenia Definition and Characteristics Symptoms, Treatments and Risk Factors By Marcia Purse | Reviewed by Steven Gans, MDUpdated July 06, 2017Share Pin Email Print Kent Mathews/Stone/Getty Images Schizophrenia is a severe, lifelong mental disorder characterized by delusions, hallucinations, incoherence and physical agitation. It is classified as a thought disorder, while bipolar disorder is a mood disorder. Incidence and Risk Factors for Schizophrenia It is estimated that 1% of the world's population has schizophrenia. While there is evidence that genetic factors have a role in developing schizophrenia, environment may play a significant part as well. The Difference Between Bipolar Disorder and Schizophrenia While bipolar I disorder may include psychotic features similar to those found in schizophrenia during manic or depressive episodes, and bipolar II disorder during depressive episodes, schizophrenia does not include ...</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
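  For reference, a loss object matching these parameters could be constructed roughly as follows. This is a sketch: the base checkpoint and the `mini_batch_size` are assumptions (the card does not record them); only `scale=20.0` and the cosine similarity function come from the parameters above.
  ```python
  from sentence_transformers import SentenceTransformer, util
  from sentence_transformers.losses import CachedMultipleNegativesRankingLoss

  # Assumed base checkpoint; the card lists the base model as unknown.
  model = SentenceTransformer("answerdotai/ModernBERT-base")

  loss = CachedMultipleNegativesRankingLoss(
      model,
      scale=20.0,                   # from the parameters above
      similarity_fct=util.cos_sim,  # "cos_sim" from the parameters above
      mini_batch_size=64,           # gradient-caching chunk size; assumed value
  )
  ```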
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 2048
- `learning_rate`: 0.0002
- `num_train_epochs`: 2
- `warmup_ratio`: 0.05
- `bf16`: True
- `batch_sampler`: no_duplicates
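A minimal sketch of how these non-default values map onto the Sentence Transformers training arguments is given below; the output directory is an illustrative placeholder, while every other value is taken from the list above.
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/modernbert-dpr",         # placeholder, not recorded in this card
    eval_strategy="steps",
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    learning_rate=2e-4,
    num_train_epochs=2,
    warmup_ratio=0.05,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # "no_duplicates" from the list above
)
```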
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 2048
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0002
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 5
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | NanoClimateFEVER_cosine_ndcg@10 | NanoDBPedia_cosine_ndcg@10 | NanoFEVER_cosine_ndcg@10 | NanoFiQA2018_cosine_ndcg@10 | NanoHotpotQA_cosine_ndcg@10 | NanoMSMARCO_cosine_ndcg@10 | NanoNFCorpus_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoQuoraRetrieval_cosine_ndcg@10 | NanoSCIDOCS_cosine_ndcg@10 | NanoArguAna_cosine_ndcg@10 | NanoSciFact_cosine_ndcg@10 | NanoTouche2020_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:-------------------------------:|:--------------------------:|:------------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|:---------------------:|:---------------------------------:|:--------------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:----------------------------:|
| 0.0185 | 2 | 8.9197 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0370 | 4 | 8.4814 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0556 | 6 | 6.6919 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0741 | 8 | 5.2493 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0926 | 10 | 4.2792 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1111 | 12 | 3.4554 | 0.2385 | 0.3867 | 0.7209 | 0.3194 | 0.5207 | 0.4438 | 0.1702 | 0.3732 | 0.8791 | 0.2758 | 0.4377 | 0.4026 | 0.4623 | 0.4331 |
| 0.1296 | 14 | 3.0437 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1481 | 16 | 2.6133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1667 | 18 | 2.3395 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1852 | 20 | 2.1826 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2037 | 22 | 2.0498 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2222 | 24 | 1.9743 | 0.2706 | 0.4493 | 0.8104 | 0.4201 | 0.6036 | 0.5542 | 0.2249 | 0.5859 | 0.9221 | 0.3091 | 0.5671 | 0.5562 | 0.4864 | 0.5200 |
| 0.2407 | 26 | 1.9111 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2593 | 28 | 1.8534 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2778 | 30 | 1.8137 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2963 | 32 | 1.7587 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3148 | 34 | 1.7124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3333 | 36 | 1.6841 | 0.2945 | 0.4652 | 0.8333 | 0.4352 | 0.6189 | 0.5619 | 0.2512 | 0.5977 | 0.9403 | 0.3322 | 0.5502 | 0.5778 | 0.4596 | 0.5321 |
| 0.3519 | 38 | 1.6765 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3704 | 40 | 1.6314 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3889 | 42 | 1.5989 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4074 | 44 | 1.592 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4259 | 46 | 1.572 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4444 | 48 | 1.5525 | 0.3045 | 0.4626 | 0.8526 | 0.4507 | 0.6275 | 0.5617 | 0.2575 | 0.5676 | 0.9406 | 0.3661 | 0.5666 | 0.5693 | 0.4231 | 0.5346 |
| 0.4630 | 50 | 1.51 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4815 | 52 | 1.5156 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5 | 54 | 1.5076 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5185 | 56 | 1.4781 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5370 | 58 | 1.4833 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5556 | 60 | 1.4576 | 0.3042 | 0.4727 | 0.8456 | 0.4578 | 0.6338 | 0.5599 | 0.2513 | 0.5883 | 0.9370 | 0.3792 | 0.5656 | 0.5229 | 0.4431 | 0.5355 |
| 0.5741 | 62 | 1.4402 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5926 | 64 | 1.438 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6111 | 66 | 1.4504 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6296 | 68 | 1.4142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6481 | 70 | 1.4141 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6667 | 72 | 1.3917 | 0.3225 | 0.4697 | 0.8632 | 0.4529 | 0.6474 | 0.5575 | 0.2341 | 0.5942 | 0.9464 | 0.3846 | 0.5467 | 0.4924 | 0.4124 | 0.5326 |
| 0.6852 | 74 | 1.4108 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7037 | 76 | 1.4 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7222 | 78 | 1.385 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7407 | 80 | 1.3946 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7593 | 82 | 1.3762 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7778 | 84 | 1.3606 | 0.3325 | 0.4747 | 0.8730 | 0.4891 | 0.6511 | 0.5941 | 0.2530 | 0.5835 | 0.9452 | 0.3776 | 0.5490 | 0.4680 | 0.4447 | 0.5412 |
| 0.7963 | 86 | 1.3615 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8148 | 88 | 1.3811 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8333 | 90 | 1.3462 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8519 | 92 | 1.3617 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8704 | 94 | 1.3345 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8889 | 96 | 1.3291 | 0.3249 | 0.4780 | 0.8791 | 0.4925 | 0.6518 | 0.6018 | 0.2678 | 0.5981 | 0.9451 | 0.3799 | 0.5474 | 0.4423 | 0.4340 | 0.5418 |
| 0.9074 | 98 | 1.3253 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9259 | 100 | 1.3375 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9444 | 102 | 1.3177 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9630 | 104 | 1.3318 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9815 | 106 | 1.297 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.0093 | 108 | 1.3128 | 0.3211 | 0.4761 | 0.8869 | 0.4904 | 0.6531 | 0.5906 | 0.2660 | 0.6035 | 0.9473 | 0.3810 | 0.5749 | 0.4420 | 0.4286 | 0.5432 |
| 1.0278 | 110 | 1.3088 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.0463 | 112 | 1.3071 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.0648 | 114 | 1.2936 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.0833 | 116 | 1.2839 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.1019 | 118 | 1.2693 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.1204 | 120 | 1.291 | 0.3022 | 0.4793 | 0.8822 | 0.5117 | 0.6691 | 0.5708 | 0.2637 | 0.6140 | 0.9521 | 0.3913 | 0.5773 | 0.4487 | 0.4281 | 0.5454 |
| 1.1389 | 122 | 1.2636 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.1574 | 124 | 1.2427 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.1759 | 126 | 1.2167 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.1944 | 128 | 1.202 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.2130 | 130 | 1.1931 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.2315 | 132 | 1.178 | 0.2842 | 0.4731 | 0.8755 | 0.5114 | 0.6814 | 0.5611 | 0.2731 | 0.6122 | 0.9477 | 0.3926 | 0.5723 | 0.4647 | 0.4441 | 0.5457 |
| 1.25 | 134 | 1.1955 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.2685 | 136 | 1.18 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.2870 | 138 | 1.1771 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.3056 | 140 | 1.173 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.3241 | 142 | 1.141 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.3426 | 144 | 1.1531 | 0.2816 | 0.4822 | 0.9067 | 0.5164 | 0.6609 | 0.5758 | 0.2713 | 0.6295 | 0.9596 | 0.4018 | 0.5862 | 0.4615 | 0.4309 | 0.5511 |
| 1.3611 | 146 | 1.1608 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.3796 | 148 | 1.1489 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.3981 | 150 | 1.1531 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.4167 | 152 | 1.1391 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.4352 | 154 | 1.1405 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.4537 | 156 | 1.1336 | 0.3180 | 0.4810 | 0.8891 | 0.5077 | 0.6655 | 0.5609 | 0.2797 | 0.5979 | 0.9557 | 0.3988 | 0.6011 | 0.5093 | 0.4176 | 0.5525 |
| 1.4722 | 158 | 1.1165 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.4907 | 160 | 1.1316 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.5093 | 162 | 1.1328 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.5278 | 164 | 1.1229 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.5463 | 166 | 1.1312 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.5648 | 168 | 1.1112 | 0.2801 | 0.4865 | 0.9104 | 0.5040 | 0.6631 | 0.5666 | 0.2847 | 0.6059 | 0.9599 | 0.4003 | 0.5906 | 0.4927 | 0.4312 | 0.5520 |
| 1.5833 | 170 | 1.1304 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.6019 | 172 | 1.1257 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.6204 | 174 | 1.139 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.6389 | 176 | 1.1116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.6574 | 178 | 1.1161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.6759 | 180 | 1.1024 | 0.2991 | 0.4822 | 0.9009 | 0.4886 | 0.6652 | 0.5659 | 0.2577 | 0.6147 | 0.9597 | 0.4051 | 0.5747 | 0.4585 | 0.4207 | 0.5456 |
| 1.6944 | 182 | 1.1239 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.7130 | 184 | 1.1266 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.7315 | 186 | 1.1154 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.75 | 188 | 1.1382 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.7685 | 190 | 1.102 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.7870 | 192 | 1.1046 | 0.3107 | 0.4764 | 0.9040 | 0.4828 | 0.6680 | 0.5747 | 0.2625 | 0.5969 | 0.9567 | 0.3948 | 0.5801 | 0.4641 | 0.4313 | 0.5464 |
| 1.8056 | 194 | 1.1241 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.8241 | 196 | 1.1266 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.8426 | 198 | 1.1257 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.8611 | 200 | 1.1148 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.8796 | 202 | 1.1133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.8981 | 204 | 1.1149 | 0.2840 | 0.4733 | 0.9203 | 0.4901 | 0.6700 | 0.5747 | 0.2547 | 0.6061 | 0.9594 | 0.3972 | 0.5856 | 0.4572 | 0.4180 | 0.5454 |
| 1.9167 | 206 | 1.1122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.9352 | 208 | 1.1259 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.9537 | 210 | 1.1215 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.9722 | 212 | 1.1047 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.9907 | 214 | 1.1166 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
</details>
### Framework Versions
- Python: 3.11.9
- Sentence Transformers: 3.3.1
- Transformers: 4.48.0.dev0
- PyTorch: 2.6.0.dev20241112+cu121
- Accelerate: 1.2.1
- Datasets: 2.21.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Nutanix/CodeLlama-7b-Instruct-hf_cpp_unit_tests_full_finetuning_class_level | Nutanix | "2024-09-15T18:13:55Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-15T18:11:20Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zelk12/MT1-Gen1-MU-gemma-2-Av4AMT1-9B | zelk12 | "2024-10-23T17:29:13Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B",
"base_model:merge:lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B",
"base_model:zelk12/MT1-gemma-2-9B",
"base_model:merge:zelk12/MT1-gemma-2-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-23T17:22:48Z" | ---
base_model:
- zelk12/MT1-gemma-2-9B
- lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT1-gemma-2-9B](https://huggingface.co/zelk12/MT1-gemma-2-9B)
* [lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B
- model: zelk12/MT1-gemma-2-9B
merge_method: slerp
base_model: lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B
dtype: bfloat16
parameters:
t: 0.5
```
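The merged checkpoint loads like any other Gemma 2 model; below is a minimal usage sketch with 🤗 transformers (the dtype, device placement and generation settings are illustrative assumptions, not part of the merge recipe).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "zelk12/MT1-Gen1-MU-gemma-2-Av4AMT1-9B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

# Simple generation example; prompt and max_new_tokens are placeholders.
inputs = tokenizer("Write a short haiku about merging models.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```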
|
ShotaMatsumoto/gpt0.2b_test | ShotaMatsumoto | "2024-09-23T11:35:06Z" | 126 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-23T07:08:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
furusu/PFG | furusu | "2023-03-28T00:39:43Z" | 0 | 22 | null | [
"onnx",
"license:apache-2.0",
"region:us"
] | null | "2023-02-18T00:41:46Z" | ---
license: apache-2.0
---
This repository hosts the PFG weight files. You can either clone [pfg](https://github.com/laksjdjf/pfg) directly and use it with generate.py, or try it out on [Colab](https://colab.research.google.com/github/laksjdjf/pfg/blob/main/pfg.ipynb).
# wd-v1-4-vit-tagger-v2-last-pooling-layer.onnx
An ONNX file for outputting the last pooling layer. It was created from the ONNX file of [wd-v1-4-vit-tagger-v2](https://huggingface.co/SmilingWolf/wd-v1-4-vit-tagger-v2).
# wd14-n10
Trained using [waifu-diffusion](https://huggingface.co/hakurei/waifu-diffusion). The number of tokens is 10.
It may also work to some extent with other SD2-family models (regardless of v_pred).
The training data consists of 610,000 images that all contain 1girl, so it may not handle multiple characters.
Copyrighted characters probably cannot be reproduced very well either, since the base model cannot reproduce them that well in the first place. If you want to generate a specific character, please train a model for that purpose.
Generation examples:
Note: for copyright reasons, the input images were created with waifu-diffusion, so the reproduction fidelity is higher than it would be with ordinary images. In general, images that waifu-diffusion cannot generate cannot be generated with this method either.


# wd15beta2-n10
It was resumed from wd14-n10 for [waifu-diffusion-1.5-beta2](https://huggingface.co/waifu-diffusion/wd-1-5-beta2) on the same dataset.
# Licence
These files do not work on their own, so please follow the license of the pretrained model you use them with. |
tangg555/clip-vit-base-patch32-finetuned-openai-clip-vit-base-patch32-emnist-letter | tangg555 | "2024-09-12T23:01:04Z" | 61 | 0 | transformers | [
"transformers",
"safetensors",
"clip",
"image-classification",
"generated_from_trainer",
"base_model:openai/clip-vit-base-patch32",
"base_model:finetune:openai/clip-vit-base-patch32",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-09-06T18:28:47Z" | ---
library_name: transformers
base_model: openai/clip-vit-base-patch32
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clip-vit-base-patch32-finetuned-openai-clip-vit-base-patch32-emnist-letter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-vit-base-patch32-finetuned-openai-clip-vit-base-patch32-emnist-letter
This model is a fine-tuned version of [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1524
- Accuracy: 0.9465
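A minimal inference sketch with the 🤗 image-classification pipeline (this assumes the checkpoint loads through the standard image-classification auto class; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="tangg555/clip-vit-base-patch32-finetuned-openai-clip-vit-base-patch32-emnist-letter",
)

# "letter.png" is a placeholder path to an EMNIST-letters style image.
print(classifier("letter.png"))
```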
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.0859 | 0.9994 | 877 | 0.4055 | 0.8640 |
| 0.927 | 2.0 | 1755 | 0.3652 | 0.8782 |
| 0.83 | 2.9994 | 2632 | 0.2687 | 0.9066 |
| 0.7747 | 4.0 | 3510 | 0.2356 | 0.9189 |
| 0.7545 | 4.9994 | 4387 | 0.2147 | 0.9245 |
| 0.6461 | 6.0 | 5265 | 0.1889 | 0.9320 |
| 0.6457 | 6.9994 | 6142 | 0.1784 | 0.9354 |
| 0.6796 | 8.0 | 7020 | 0.1659 | 0.9412 |
| 0.5502 | 8.9994 | 7897 | 0.1548 | 0.9461 |
| 0.5797 | 9.9943 | 8770 | 0.1524 | 0.9465 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
nblinh/aefa2701-ff89-4c9a-9edc-cf5ed8f363ac | nblinh | "2025-02-04T12:42:26Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-04T11:45:08Z" | ---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aefa2701-ff89-4c9a-9edc-cf5ed8f363ac
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-Instruct-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 595157d5db82c2d7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/595157d5db82c2d7_train_data.json
type:
field_input: instrument_summary
field_instruction: mood
field_output: caption
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh/aefa2701-ff89-4c9a-9edc-cf5ed8f363ac
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/595157d5db82c2d7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2b9fa8bc-4f60-420d-8199-c24c6cda2894
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2b9fa8bc-4f60-420d-8199-c24c6cda2894
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# aefa2701-ff89-4c9a-9edc-cf5ed8f363ac
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8194
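For inference, the LoRA adapter is applied on top of the base model; a minimal sketch with `peft` and `transformers` (the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-Instruct-hf", device_map="auto")
model = PeftModel.from_pretrained(base, "nblinh/aefa2701-ff89-4c9a-9edc-cf5ed8f363ac")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-Instruct-hf")

# Placeholder prompt in the spirit of the training data (mood/caption text).
prompt = "Describe the mood of this track in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```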
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.874 | 0.0100 | 200 | 0.8194 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ankgoyal/rvt | ankgoyal | "2025-03-14T21:47:21Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2024-12-13T01:23:19Z" | ---
license: other
license_name: nvidia-source-code-license
license_link: https://github.com/NVlabs/RVT/blob/master/LICENSE
---
|
arcwarden46/94aaa5a7-2438-47aa-a214-b5c77a4c2290 | arcwarden46 | "2025-02-03T01:24:47Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-135M",
"base_model:adapter:unsloth/SmolLM2-135M",
"license:apache-2.0",
"region:us"
] | null | "2025-02-03T01:20:01Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 94aaa5a7-2438-47aa-a214-b5c77a4c2290
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-135M
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5b239afff048be33_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5b239afff048be33_train_data.json
type:
field_instruction: instruction
field_output: output_1
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: arcwarden46/94aaa5a7-2438-47aa-a214-b5c77a4c2290
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/5b239afff048be33_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3883aa00-6588-42b3-bf77-b1f07b299789
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3883aa00-6588-42b3-bf77-b1f07b299789
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 94aaa5a7-2438-47aa-a214-b5c77a4c2290
This model is a fine-tuned version of [unsloth/SmolLM2-135M](https://huggingface.co/unsloth/SmolLM2-135M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7232 | 0.0020 | 1 | 1.7911 |
| 1.923 | 0.1020 | 50 | 1.6649 |
| 1.9019 | 0.2040 | 100 | 1.6398 |
| 1.8592 | 0.3060 | 150 | 1.6323 |
| 1.8223 | 0.4080 | 200 | 1.6299 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
netsol/resume-llama-3.1-8b-4bit | netsol | "2024-10-27T18:24:01Z" | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-10-27T18:01:24Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** netsol
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
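A minimal loading sketch with Unsloth's `FastLanguageModel` (the max sequence length, prompt and generation settings are illustrative assumptions):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="netsol/resume-llama-3.1-8b-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch the model into inference mode

# Placeholder prompt; adapt to your own resume-processing use case.
inputs = tokenizer("Summarize the following resume:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```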
|
skrh/finetuning-sentiment-model-3000-samples | skrh | "2023-11-15T04:28:24Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-10-24T20:02:57Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8675496688741722
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3130
- Accuracy: 0.8667
- F1: 0.8675
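A minimal inference sketch with the 🤗 text-classification pipeline (the example review is illustrative; label names depend on the fine-tuning setup):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="skrh/finetuning-sentiment-model-3000-samples",
)

# Example IMDB-style review; labels are LABEL_0/LABEL_1 unless id2label was customized.
print(classifier("This movie was an absolute delight from start to finish."))
```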
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
TinySuitStarfish/q-Taxi-v3 | TinySuitStarfish | "2022-06-06T00:23:40Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2022-06-06T00:23:34Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.48 +/- 2.65
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions assumed to be defined in the accompanying notebook.
model = load_from_hub(repo_id="TinySuitStarfish/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
mradermacher/AceGPT-v2-70B-Chat-i1-GGUF | mradermacher | "2024-11-16T04:59:10Z" | 16 | 0 | transformers | [
"transformers",
"gguf",
"ar",
"zh",
"en",
"base_model:FreedomIntelligence/AceGPT-v2-70B-Chat",
"base_model:quantized:FreedomIntelligence/AceGPT-v2-70B-Chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-11-15T11:55:21Z" | ---
base_model: FreedomIntelligence/AceGPT-v2-70B-Chat
language:
- ar
- zh
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/FreedomIntelligence/AceGPT-v2-70B-Chat
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
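For the multi-part quants in the table below, the parts only need to be concatenated back into a single file before use; a minimal Python sketch (equivalent to `cat`, using the Q6_K filenames from this repo):
```python
import shutil
from pathlib import Path

# Concatenate split GGUF parts into one file (order matters, hence the sort).
parts = sorted(Path(".").glob("AceGPT-v2-70B-Chat.i1-Q6_K.gguf.part*of*"))
with open("AceGPT-v2-70B-Chat.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream each part without loading it fully into RAM
```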
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/AceGPT-v2-70B-Chat-i1-GGUF/resolve/main/AceGPT-v2-70B-Chat.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/SJT-8B-V1.1-i1-GGUF | mradermacher | "2025-02-04T22:57:20Z" | 516 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Sakalti/SJT-8B-V1.1",
"base_model:quantized:Sakalti/SJT-8B-V1.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-02-04T22:07:00Z" | ---
base_model: Sakalti/SJT-8B-V1.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Sakalti/SJT-8B-V1.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SJT-8B-V1.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-Q2_K.gguf) | i1-Q2_K | 3.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-IQ3_S.gguf) | i1-IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-IQ3_M.gguf) | i1-IQ3_M | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 5.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-Q4_0.gguf) | i1-Q4_0 | 5.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-Q4_1.gguf) | i1-Q4_1 | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-8B-V1.1-i1-GGUF/resolve/main/SJT-8B-V1.1.i1-Q6_K.gguf) | i1-Q6_K | 7.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/AV-FI-Qwen2.5-72B-PT-BR-Instruct-GGUF | mradermacher | "2025-03-29T05:57:50Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:amadeusai/AV-FI-Qwen2.5-72B-PT-BR-Instruct",
"base_model:quantized:amadeusai/AV-FI-Qwen2.5-72B-PT-BR-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-29T04:22:40Z" | ---
base_model: amadeusai/AV-FI-Qwen2.5-72B-PT-BR-Instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/amadeusai/AV-FI-Qwen2.5-72B-PT-BR-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-72B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-72B-PT-BR-Instruct.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-72B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-72B-PT-BR-Instruct.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-72B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-72B-PT-BR-Instruct.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-72B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-72B-PT-BR-Instruct.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-72B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-72B-PT-BR-Instruct.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-72B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-72B-PT-BR-Instruct.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-72B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-72B-PT-BR-Instruct.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-72B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-72B-PT-BR-Instruct.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-72B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-72B-PT-BR-Instruct.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-72B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-72B-PT-BR-Instruct.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-72B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-72B-PT-BR-Instruct.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-72B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-72B-PT-BR-Instruct.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-72B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-72B-PT-BR-Instruct.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-72B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-72B-PT-BR-Instruct.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-72B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-72B-PT-BR-Instruct.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Jonjew/JennaJameson | Jonjew | "2025-03-30T00:01:20Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | "2025-03-30T00:01:13Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
Breathtaking over the shoulder shot photography of ohwx looking at viewer,
imperfections, necklace, looking at viewer, eyelashes, fine hair detail,
entire hairstyle visible, perfect eyes with iris pattern, sensual lips,
nose, (perfectly sharp:1.3), realistic textures, (deep focus:1.5), 8k uhd,
dslr, ultra high quality image, film grain, Fujifilm XT3
parameters:
negative_prompt: JennaJameson_flux_lora_v5_Weight-1.00
output:
url: >-
images/JennaJameson_flux_lora_v5_Weight-1.00_2025-02-25_2025-02-25-102827_0_03.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ohwx
license: unknown
---
# Jenna Jameson
<Gallery />
## Model description
FROM https://civitai.green/models/1310909/jenna-jameson-adult-film-actress90s?modelVersionId=1479473
Trigger ohwx
Strength 1.1
👍 *** If you love it, like it! ***👍
workflow: https://civitai.com/models/1088678
👑 Jenna Jameson (Adult film actress)(90s)🎬
***This LoRA does not reproduce every body part very well yet, but I'm working very hard to make it happen.***
About my adult film actress celebrity LoRAs
The dataset used to train this LoRA contains around 20-40% face-only images, and the rest is the body. Doing that will sometimes cause some deformation of hands, feet and other parts, which is normal, since the LoRA tries to introduce elements that fight the well-trained ones in the Flux model. Still, generating multiple images will bring flawless results.
This is a LoRA for Flux.1 Dev. It works with other models, but you must drop some blocks (a good start is 19-32).
Trained with ai-toolkit, so merging it is not easy.
To get the best result
Guidance: 2.2-3
Steps (dev): 30-40
daemon detailer (lying sigma sampler): factor: -0.02, start 0.06, end 0.75
Resolution: Upscale the latent by 1.25 or 1.5 and you'll get awesome results (takes longer but is worth it).
Trigger word (it may work better in certain contexts): ohwx
Enjoy!
## Trigger words
You should use `ohwx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/JennaJameson/tree/main) them in the Files & versions tab.
|
rameye/PSmodel | rameye | "2024-04-19T08:32:42Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2024-04-19T08:32:42Z" | ---
license: other
license_name: privacy
license_link: LICENSE
---
|
Matej/bert-base-buddhist-sanskrit | Matej | "2022-04-15T08:54:45Z" | 20 | 3 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-04-13T11:37:54Z" | ---
tags:
- Buddhist Sanskrit
- BERT
- name: bert-base-buddhist-sanskrit
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-buddhist-sanskrit
The best performing model of the research described in the paper 'Embeddings models for Buddhist Sanskrit' published at LREC 2022 (Link to the paper will be added after
the publication of conference proceedings).
## Model description
The model has the bert-base architecture and configuration and was pretrained from scratch as a masked language model
on the Sanskrit reference corpus, and fine-tuned on the smaller corpus of Buddhist Sanskrit.
## How to use it
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

model = AutoModelForMaskedLM.from_pretrained("Matej/bert-base-buddhist-sanskrit")
tokenizer = AutoTokenizer.from_pretrained("Matej/bert-base-buddhist-sanskrit", use_fast=True)
```
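A minimal fill-mask sketch built on the objects above (the Sanskrit example sentence is illustrative, not taken from the paper):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# The input must contain the tokenizer's mask token; the sentence is a placeholder.
text = f"evaṃ {tokenizer.mask_token} śrutam"
for prediction in fill_mask(text):
    print(prediction["token_str"], round(prediction["score"], 4))
```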
## Intended uses & limitations
MIT license, no limitations
## Training and evaluation data
See the paper 'Embeddings models for Buddhist Sanskrit' for details on the corpora and the evaluation procedure.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 28
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 300.0
### Framework versions
- Transformers 4.11.2
- Pytorch 1.7.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
mgmeskill/rl_course_vizdoom_health_gathering_supreme | mgmeskill | "2023-09-15T22:44:36Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-09-15T22:44:26Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.94 +/- 4.18
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r mgmeskill/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
mradermacher/BigFalcon3-36B-GGUF | mradermacher | "2025-01-14T12:22:14Z" | 308 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ehristoforu/BigFalcon3-36B",
"base_model:quantized:ehristoforu/BigFalcon3-36B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-13T00:54:24Z" | ---
base_model: ehristoforu/BigFalcon3-36B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ehristoforu/BigFalcon3-36B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/BigFalcon3-36B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BigFalcon3-36B-GGUF/resolve/main/BigFalcon3-36B.Q2_K.gguf) | Q2_K | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/BigFalcon3-36B-GGUF/resolve/main/BigFalcon3-36B.Q3_K_S.gguf) | Q3_K_S | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/BigFalcon3-36B-GGUF/resolve/main/BigFalcon3-36B.Q3_K_M.gguf) | Q3_K_M | 8.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BigFalcon3-36B-GGUF/resolve/main/BigFalcon3-36B.Q3_K_L.gguf) | Q3_K_L | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/BigFalcon3-36B-GGUF/resolve/main/BigFalcon3-36B.IQ4_XS.gguf) | IQ4_XS | 9.8 | |
| [GGUF](https://huggingface.co/mradermacher/BigFalcon3-36B-GGUF/resolve/main/BigFalcon3-36B.Q4_K_S.gguf) | Q4_K_S | 10.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BigFalcon3-36B-GGUF/resolve/main/BigFalcon3-36B.Q4_K_M.gguf) | Q4_K_M | 11.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BigFalcon3-36B-GGUF/resolve/main/BigFalcon3-36B.Q5_K_S.gguf) | Q5_K_S | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/BigFalcon3-36B-GGUF/resolve/main/BigFalcon3-36B.Q5_K_M.gguf) | Q5_K_M | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/BigFalcon3-36B-GGUF/resolve/main/BigFalcon3-36B.Q6_K.gguf) | Q6_K | 14.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BigFalcon3-36B-GGUF/resolve/main/BigFalcon3-36B.Q8_0.gguf) | Q8_0 | 19.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
tomvoelker/roberta2roberta-roberta-base-cnn-dailymail-seed42 | tomvoelker | "2025-03-14T03:10:03Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-03-13T14:38:24Z" | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: roberta2roberta-roberta-base-cnn-dailymail-seed42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta2roberta-roberta-base-cnn-dailymail-seed42
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9434
- Rouge1: 0.4156
- Rouge2: 0.1935
- Rougel: 0.2864
- Rougelsum: 0.3925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.7104 | 0.2229 | 2000 | 3.3305 | 0.2437 | 0.0462 | 0.1606 | 0.2276 |
| 3.0905 | 0.4458 | 4000 | 2.7549 | 0.3268 | 0.1007 | 0.2079 | 0.3065 |
| 2.69 | 0.6687 | 6000 | 2.4043 | 0.3811 | 0.1543 | 0.2483 | 0.3590 |
| 2.5072 | 0.8916 | 8000 | 2.2511 | 0.3927 | 0.1701 | 0.2614 | 0.3703 |
| 2.3162 | 1.1145 | 10000 | 2.1651 | 0.3982 | 0.1758 | 0.2682 | 0.3756 |
| 2.2608 | 1.3374 | 12000 | 2.1016 | 0.4019 | 0.1802 | 0.2717 | 0.3794 |
| 2.2161 | 1.5603 | 14000 | 2.0631 | 0.4082 | 0.1878 | 0.2789 | 0.3855 |
| 2.1959 | 1.7832 | 16000 | 2.0262 | 0.4073 | 0.1863 | 0.2794 | 0.3854 |
| 2.1558 | 2.0061 | 18000 | 2.0091 | 0.4111 | 0.1890 | 0.2815 | 0.3884 |
| 2.0484 | 2.2290 | 20000 | 1.9882 | 0.4130 | 0.1914 | 0.2836 | 0.3898 |
| 2.0264 | 2.4519 | 22000 | 1.9743 | 0.4143 | 0.1930 | 0.2857 | 0.3915 |
| 2.0051 | 2.6748 | 24000 | 1.9534 | 0.4140 | 0.1918 | 0.2848 | 0.3912 |
| 2.0068 | 2.8977 | 26000 | 1.9434 | 0.4156 | 0.1935 | 0.2864 | 0.3925 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|