Dataset columns (type and observed range):

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-09 18:59:16 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 551 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-09 18:27:33 |
| card | string | length 11 – 1.01M |

Each row below is given as `modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt |`, followed by that row's `card` contents.
John6666/titania-noah-mix-25d27d-v80-sdxl | John6666 | 2024-09-12T23:31:50Z | 226 | 0 | diffusers | ["diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "cosplay", "boobs", "2.5D", "2.7D", "pony", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"] | text-to-image | 2024-09-12T23:11:35Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- cosplay
- boobs
- 2.5D
- 2.7D
- pony
---
The original model is [here](https://civitai.com/models/530139/titanianoahmix-25d-27d?modelVersionId=833712).
This model was created by [XXXNOAHXXX](https://civitai.com/user/XXXNOAHXXX).
|
John6666/copycat-v31-sdxl | John6666 | 2024-09-12T23:31:49Z | 168 | 1 | diffusers | ["diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "2D", "sd15 style", "cute", "pony", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"] | text-to-image | 2024-09-12T23:08:22Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- 2D
- sd15 style
- cute
- pony
base_model: tsukihara/xl_model
---
The original model is [here](https://civitai.com/models/316174/copycat?modelVersionId=835096).
This model was created by [calculater](https://civitai.com/user/calculater).
|
John6666/wai-ani-hentai-pony-v5-sdxl | John6666 | 2024-09-12T23:24:02Z | 5,898 | 4 | diffusers | ["diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"] | text-to-image | 2024-09-12T23:12:21Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
---
The original model is [here](https://civitai.com/models/553648/wai-ani-hentai-ponyxl?modelVersionId=834110).
This model was created by [WAI0731](https://civitai.com/user/WAI0731).
|
John6666/speciosa-anime-v14-sdxl | John6666 | 2024-09-12T23:21:31Z | 92 | 1 | diffusers | ["diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "realistic", "2.5D", "pony", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"] | text-to-image | 2024-09-12T23:13:44Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- realistic
- 2.5D
- pony
---
The original model is [here](https://civitai.com/models/687637/speciosa-anime?modelVersionId=808821).
This model was created by [Oraculum](https://civitai.com/user/Oraculum).
|
ArtisanLabs/ArtisanXL | ArtisanLabs | 2024-09-12T23:20:04Z | 7 | 3 | diffusers | ["diffusers", "safetensors", "stable-diffusion-xl", "text-to-image", "art", "fine-art", "en", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"] | text-to-image | 2024-09-11T15:47:14Z |
---
license: openrail++
language:
- en
tags:
- stable-diffusion-xl
- text-to-image
- art
- fine-art
inference: true
---
<style>
.artisan-title {
font-family: Arial, sans-serif;
font-size: 3em;
text-align: center;
color: #000;
text-transform: uppercase;
letter-spacing: 0.1em;
padding: 0.5em 0;
margin: 0;
}
.image-grid {
display: flex;
flex-wrap: wrap;
margin-bottom: 20px;
}
.image-grid img {
width: 50%;
height: auto;
display: block;
margin: 0 !important; /* Override any default margins */
padding: 0;
}
.image-grid .wide-image {
width: 100%;
}
/* Override Hugging Face's default styling */
.prose :where(img):not(:where([class~=not-prose],[class~=not-prose] *)) {
margin-top: 0 !important;
margin-bottom: 0 !important;
}
</style>
<h1 class="artisan-title">Artisan XL</h1>
<div class="image-grid">
<img src="images/Picasso.webp" alt="Square image 1">
<img src="images/Repin.webp" alt="Square image 2">
<img src="images/Van Gogh.webp" alt="Square image 3">
<img src="images/Vermeer.webp" alt="Square image 4">
<img src="images/long_van_gogh.webp" alt="Wide image" class="wide-image">
</div>
# Artisan XL: Unparalleled Fine Art Generation
Artisan XL is a state-of-the-art text-to-image model that pushes the boundaries of AI-generated fine art. Trained on an extensive dataset of 200,000 high-resolution oil paintings, this model offers unparalleled style fidelity for a wide range of artistic styles throughout human history.
## Key Features
- Trained for 1200 A100 hours on 8 A100 GPUs
- Captioned with Claude Sonnet 3.5 for precise style understanding
- Exceptional style fidelity for artists like Picasso, Van Gogh, Rembrandt, Klimt, Repin, Serov, Kandinsky, and many more
- Supports all basic SDXL tools and community techniques
## Usage Tips
1. **Prompt Engineering**: We recommend using long, detailed prompts with weighting. Ideally, generate prompts using Claude Sonnet or Haiku for best results.
2. **Negative Prompts**: Use empty or very short negative prompts. Common negative keywords include: `simplified, rough, blurred, crude, imperfections, sketch`
3. **Settings**:
- Works well with guidance limiter
- Recommended CFG scale: 3.0-5.5 (see the example sketch below)
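These settings map directly onto a standard diffusers call. The minimal sketch below assumes the repository loads as a regular SDXL checkpoint via `StableDiffusionXLPipeline` (as the repo tags suggest); the prompt is a placeholder.
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumes the weights in this repo load directly as an SDXL pipeline.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "ArtisanLabs/ArtisanXL", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="portrait of an old fisherman, thick impasto brushwork, oil on canvas",  # placeholder prompt
    negative_prompt="simplified, rough, blurred, crude, imperfections, sketch",
    guidance_scale=4.0,          # within the recommended 3.0-5.5 range
    width=1152, height=896,      # one of the supported resolutions listed further down
    num_inference_steps=30,
).images[0]
image.save("artisan_xl_sample.png")
```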
## Model Variants
1. **Full Model**: Complete SDXL fine-tuned model
2. **LoRA Variant**:
- Linear and Conv modules
- Rank 128
- Provides 95% of full model capabilities (see the LoRA loading sketch below)
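If you use the LoRA variant, it can be attached to a stock SDXL base with diffusers' LoRA loader. The sketch below is illustrative only: the weight file name is hypothetical, so check the actual files in the repository.
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load a stock SDXL base, then attach the Artisan XL LoRA (rank 128, linear + conv modules).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# "artisan_xl_lora.safetensors" is a hypothetical file name -- substitute the real one.
pipe.load_lora_weights("ArtisanLabs/ArtisanXL", weight_name="artisan_xl_lora.safetensors")
```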
### Multi Aspect Resolution
This model supports generating images at the following dimensions:
| Dimensions | Aspect Ratio |
|-------------------|-----------------|
| `1024 x 1024` | 1:1 Square |
| `1152 x 896` | 9:7 |
| `896 x 1152` | 7:9 |
| `1216 x 832` | 19:13 |
| `832 x 1216` | 13:19 |
| `1344 x 768` | 7:4 Horizontal |
| `768 x 1344` | 4:7 Vertical |
| `1536 x 640` | 12:5 Horizontal |
| `640 x 1536` | 5:12 Vertical |
### Acknowledgements
The development and release of Artisan XL would not have been possible without the invaluable contributions and support from the following individuals and organizations:
- **[Kohya SS](https://github.com/kohya-ss)**: For training scripts.
## Limitations
While Artisan XL offers exceptional capabilities in fine art generation, it's important to note:
1. The model specializes in fine art styles and may not perform as well for other genres.
2. Results can vary based on prompt quality and settings.
3. As with all AI models, occasional unexpected outputs may occur.
## Contact
For any inquiries, collaborations, or custom training requests, please contact:
[email protected]
We're excited to see what you create with Artisan XL! Happy art-making!
|
mradermacher/Phi-3.5-mini-dare_ties-GGUF | mradermacher | 2024-09-12T23:13:09Z | 90 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:bunnycore/Phi-3.5-mini-dare_ties", "base_model:quantized:bunnycore/Phi-3.5-mini-dare_ties", "endpoints_compatible", "region:us", "conversational"] | null | 2024-09-12T18:55:05Z |
---
base_model: bunnycore/Phi-3.5-mini-dare_ties
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Phi-3.5-mini-dare_ties
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
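For a quick start, the sketch below pulls the Q4_K_M file from the table further down using the llama-cpp-python bindings; any other llama.cpp-based runtime works just as well.
```python
from llama_cpp import Llama

# Downloads the Q4_K_M quant from this repo (via huggingface_hub) and loads it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Phi-3.5-mini-dare_ties-GGUF",
    filename="Phi-3.5-mini-dare_ties.Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust as needed
)
out = llm("Explain what a DARE-TIES merge is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```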
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-dare_ties-GGUF/resolve/main/Phi-3.5-mini-dare_ties.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-dare_ties-GGUF/resolve/main/Phi-3.5-mini-dare_ties.IQ3_XS.gguf) | IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-dare_ties-GGUF/resolve/main/Phi-3.5-mini-dare_ties.IQ3_S.gguf) | IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-dare_ties-GGUF/resolve/main/Phi-3.5-mini-dare_ties.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-dare_ties-GGUF/resolve/main/Phi-3.5-mini-dare_ties.IQ3_M.gguf) | IQ3_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-dare_ties-GGUF/resolve/main/Phi-3.5-mini-dare_ties.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-dare_ties-GGUF/resolve/main/Phi-3.5-mini-dare_ties.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-dare_ties-GGUF/resolve/main/Phi-3.5-mini-dare_ties.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-dare_ties-GGUF/resolve/main/Phi-3.5-mini-dare_ties.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-dare_ties-GGUF/resolve/main/Phi-3.5-mini-dare_ties.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-dare_ties-GGUF/resolve/main/Phi-3.5-mini-dare_ties.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-dare_ties-GGUF/resolve/main/Phi-3.5-mini-dare_ties.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-dare_ties-GGUF/resolve/main/Phi-3.5-mini-dare_ties.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-dare_ties-GGUF/resolve/main/Phi-3.5-mini-dare_ties.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
cuneytkaya/fine-tuned-t5-small-turkish-mmlu | cuneytkaya | 2024-09-12T23:03:22Z | 16 | 1 | null | ["safetensors", "t5", "tr", "dataset:alibayram/turkish_mmlu", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "region:us"] | null | 2024-09-12T22:06:04Z |
---
license: apache-2.0
datasets:
- alibayram/turkish_mmlu
language:
- tr
base_model:
- google-t5/t5-small
---
# fine-tuned-t5-small-turkish-mmlu
<!-- Provide a quick summary of what the model is/does. -->
The fine-tuned [T5-Small](https://huggingface.co/google-t5/t5-small) model is a question-answering model trained on the [Turkish MMLU](https://huggingface.co/datasets/alibayram/turkish_mmlu) dataset, which consists of questions from various academic and professional exams in Turkey, including KPSS and TUS. The model takes a Turkish question as input and generates the correct answer. It is designed to perform well on Turkish-language question-answering tasks, leveraging the structure of the T5 architecture to handle text-to-text transformations.
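A minimal generation sketch with the Transformers library is shown below; the exact input formatting used during fine-tuning is not documented here, so the plain-question prompt is an assumption.
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "cuneytkaya/fine-tuned-t5-small-turkish-mmlu"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Plain Turkish question as input (the prompt format is an assumption).
question = "Türkiye Cumhuriyeti'nin ilk cumhurbaşkanı kimdir?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```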
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
@dataset{bayram_2024_13378019,
  author    = {Bayram, M. Ali},
  title     = {{Turkish MMLU: Yapay Zeka ve Akademik Uygulamalar İçin En Kapsamlı ve Özgün Türkçe Veri Seti}},
  month     = aug,
  year      = 2024,
  publisher = {Zenodo},
  version   = {v1.2},
  doi       = {10.5281/zenodo.13378019},
  url       = {https://doi.org/10.5281/zenodo.13378019}
}
#### Training Hyperparameters
- learning_rate=5e-5
- per_device_train_batch_size=8
- per_device_eval_batch_size=8
- num_train_epochs=3
- weight_decay=0.01
#### Training Results

#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
Training loss was monitored to track how well the model was learning and to guard against overfitting. After 3 epochs the model reached a training loss of 0.0749, indicating a close fit to the training data.
|
ihughes15234/phi_3_5_mini_20k_con4_bthrough | ihughes15234 | 2024-09-12T22:55:16Z | 75 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-09-12T22:50:31Z |
---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** ihughes15234
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jott1970/Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q5_K_S-GGUF | jott1970 | 2024-09-12T22:44:03Z | 7 | 0 | transformers | ["transformers", "gguf", "llama-cpp", "gguf-my-repo", "de", "base_model:DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1", "base_model:quantized:DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1", "license:llama3", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2024-09-12T22:43:37Z |
---
base_model: DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1
language:
- de
library_name: transformers
license: llama3
tags:
- llama-cpp
- gguf-my-repo
---
# jott1970/Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q5_K_S-GGUF
This model was converted to GGUF format from [`DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1`](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jott1970/Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q5_K_S-GGUF --hf-file llama3-discoleo-instruct-8b-32k-v0.1-q5_k_s-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jott1970/Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q5_K_S-GGUF --hf-file llama3-discoleo-instruct-8b-32k-v0.1-q5_k_s-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo jott1970/Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q5_K_S-GGUF --hf-file llama3-discoleo-instruct-8b-32k-v0.1-q5_k_s-imat.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo jott1970/Llama3-DiscoLeo-Instruct-8B-32k-v0.1-Q5_K_S-GGUF --hf-file llama3-discoleo-instruct-8b-32k-v0.1-q5_k_s-imat.gguf -c 2048
```
|
bunnycore/LLama-3.1-8B-HyperNova-abliteration | bunnycore | 2024-09-12T22:31:27Z | 15 | 1 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:bunnycore/LLama-3.1-8B-HyperNova", "base_model:merge:bunnycore/LLama-3.1-8B-HyperNova", "base_model:grimjim/Llama-3-Instruct-abliteration-LoRA-8B", "base_model:merge:grimjim/Llama-3-Instruct-abliteration-LoRA-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-09-12T22:27:30Z |
---
base_model:
- bunnycore/LLama-3.1-8B-HyperNova
- grimjim/Llama-3-Instruct-abliteration-LoRA-8B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method, with [bunnycore/LLama-3.1-8B-HyperNova](https://huggingface.co/bunnycore/LLama-3.1-8B-HyperNova) + [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B) as the base.
### Models Merged
The following models were included in the merge:
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: bunnycore/LLama-3.1-8B-HyperNova+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
dtype: bfloat16
merge_method: passthrough
models:
- model: bunnycore/LLama-3.1-8B-HyperNova+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
```
|
mradermacher/Phi-3.5-Mini-Sonet-RP-GGUF | mradermacher | 2024-09-12T22:29:11Z | 6 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:bunnycore/Phi-3.5-Mini-Sonet-RP", "base_model:quantized:bunnycore/Phi-3.5-Mini-Sonet-RP", "endpoints_compatible", "region:us", "conversational"] | null | 2024-09-12T18:38:32Z |
---
base_model: bunnycore/Phi-3.5-Mini-Sonet-RP
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Phi-3.5-Mini-Sonet-RP
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-Sonet-RP-GGUF/resolve/main/Phi-3.5-Mini-Sonet-RP.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-Sonet-RP-GGUF/resolve/main/Phi-3.5-Mini-Sonet-RP.IQ3_XS.gguf) | IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-Sonet-RP-GGUF/resolve/main/Phi-3.5-Mini-Sonet-RP.IQ3_S.gguf) | IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-Sonet-RP-GGUF/resolve/main/Phi-3.5-Mini-Sonet-RP.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-Sonet-RP-GGUF/resolve/main/Phi-3.5-Mini-Sonet-RP.IQ3_M.gguf) | IQ3_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-Sonet-RP-GGUF/resolve/main/Phi-3.5-Mini-Sonet-RP.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-Sonet-RP-GGUF/resolve/main/Phi-3.5-Mini-Sonet-RP.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-Sonet-RP-GGUF/resolve/main/Phi-3.5-Mini-Sonet-RP.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-Sonet-RP-GGUF/resolve/main/Phi-3.5-Mini-Sonet-RP.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-Sonet-RP-GGUF/resolve/main/Phi-3.5-Mini-Sonet-RP.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-Sonet-RP-GGUF/resolve/main/Phi-3.5-Mini-Sonet-RP.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-Sonet-RP-GGUF/resolve/main/Phi-3.5-Mini-Sonet-RP.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-Sonet-RP-GGUF/resolve/main/Phi-3.5-Mini-Sonet-RP.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-Sonet-RP-GGUF/resolve/main/Phi-3.5-Mini-Sonet-RP.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mlc-ai/snowflake-arctic-embed-s-q0f32-MLC | mlc-ai | 2024-09-12T22:20:11Z | 933 | 0 | mlc-llm | ["mlc-llm", "web-llm", "base_model:Snowflake/snowflake-arctic-embed-s", "base_model:quantized:Snowflake/snowflake-arctic-embed-s", "region:us"] | null | 2024-08-12T05:42:29Z |
---
library_name: mlc-llm
base_model: Snowflake/snowflake-arctic-embed-s
tags:
- mlc-llm
- web-llm
---
# snowflake-arctic-embed-s-q0f32-MLC
This is the [snowflake-arctic-embed-s](https://huggingface.co/Snowflake/snowflake-arctic-embed-s) model in MLC format `q0f32`.
The model can be used with the [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm) projects.
## Documentation
For more information on the MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
|
mlc-ai/Hermes-3-Llama-3.1-8B-q4f32_1-MLC | mlc-ai | 2024-09-12T22:19:34Z | 142 | 0 | mlc-llm | ["mlc-llm", "web-llm", "base_model:NousResearch/Hermes-3-Llama-3.1-8B", "base_model:quantized:NousResearch/Hermes-3-Llama-3.1-8B", "region:us"] | null | 2024-09-10T16:22:47Z |
---
library_name: mlc-llm
base_model: NousResearch/Hermes-3-Llama-3.1-8B
tags:
- mlc-llm
- web-llm
---
# Hermes-3-Llama-3.1-8B-q4f32_1-MLC
This is the [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) model in MLC format `q4f32_1`.
The model can be used with the [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm) projects.
## Example Usage
Here are some examples of using this model in MLC LLM.
Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
### Chat
In command line, run
```bash
mlc_llm chat HF://mlc-ai/Hermes-3-Llama-3.1-8B-q4f32_1-MLC
```
### REST Server
In command line, run
```bash
mlc_llm serve HF://mlc-ai/Hermes-3-Llama-3.1-8B-q4f32_1-MLC
```
### Python API
```python
from mlc_llm import MLCEngine
# Create engine
model = "HF://mlc-ai/Hermes-3-Llama-3.1-8B-q4f32_1-MLC"
engine = MLCEngine(model)
# Run chat completion in OpenAI API.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
    print("\n")
engine.terminate()
```
## Documentation
For more information on the MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
|
mradermacher/Phi-3.5-Mini-ChatML-GGUF | mradermacher | 2024-09-12T22:14:06Z | 13 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:bunnycore/Phi-3.5-Mini-ChatML", "base_model:quantized:bunnycore/Phi-3.5-Mini-ChatML", "endpoints_compatible", "region:us", "conversational"] | null | 2024-09-12T21:06:40Z |
---
base_model: bunnycore/Phi-3.5-Mini-ChatML
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Phi-3.5-Mini-ChatML
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-ChatML-GGUF/resolve/main/Phi-3.5-Mini-ChatML.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-ChatML-GGUF/resolve/main/Phi-3.5-Mini-ChatML.IQ3_XS.gguf) | IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-ChatML-GGUF/resolve/main/Phi-3.5-Mini-ChatML.IQ3_S.gguf) | IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-ChatML-GGUF/resolve/main/Phi-3.5-Mini-ChatML.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-ChatML-GGUF/resolve/main/Phi-3.5-Mini-ChatML.IQ3_M.gguf) | IQ3_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-ChatML-GGUF/resolve/main/Phi-3.5-Mini-ChatML.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-ChatML-GGUF/resolve/main/Phi-3.5-Mini-ChatML.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-ChatML-GGUF/resolve/main/Phi-3.5-Mini-ChatML.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-ChatML-GGUF/resolve/main/Phi-3.5-Mini-ChatML.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-ChatML-GGUF/resolve/main/Phi-3.5-Mini-ChatML.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-ChatML-GGUF/resolve/main/Phi-3.5-Mini-ChatML.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-ChatML-GGUF/resolve/main/Phi-3.5-Mini-ChatML.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-ChatML-GGUF/resolve/main/Phi-3.5-Mini-ChatML.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-ChatML-GGUF/resolve/main/Phi-3.5-Mini-ChatML.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-Mini-ChatML-GGUF/resolve/main/Phi-3.5-Mini-ChatML.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ByteBanter/llama-3-8b-Instruct-bnb-4bit-FAQ-finetuned | ByteBanter | 2024-09-12T22:02:56Z | 5 | 0 | transformers | ["transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2024-09-12T21:51:10Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** ByteBanter
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bunnycore/LLama-3.1-8B-HyperNova | bunnycore | 2024-09-12T22:01:05Z | 5 | 1 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2403.19522", "base_model:DreadPoor/Heart_Stolen-8B-Model_Stock", "base_model:merge:DreadPoor/Heart_Stolen-8B-Model_Stock", "base_model:bunnycore/HyperLLama3.1-8b-Nova", "base_model:merge:bunnycore/HyperLLama3.1-8b-Nova", "base_model:bunnycore/LLama-3.1-8b-Ultra-Max-Pro", "base_model:merge:bunnycore/LLama-3.1-8b-Ultra-Max-Pro", "base_model:grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B", "base_model:merge:grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B", "base_model:meta-llama/Llama-3.1-8B", "base_model:merge:meta-llama/Llama-3.1-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-09-12T21:56:27Z |
---
base_model:
- meta-llama/Meta-Llama-3.1-8B
- bunnycore/HyperLLama3.1-8b-Nova
- bunnycore/LLama-3.1-8b-Ultra-Max-Pro
- DreadPoor/Heart_Stolen-8B-Model_Stock
- grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B
- Replete-AI/Replete-LLM-V2-Llama-3.1-8b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) as the base.
### Models Merged
The following models were included in the merge:
* [bunnycore/HyperLLama3.1-8b-Nova](https://huggingface.co/bunnycore/HyperLLama3.1-8b-Nova)
* [bunnycore/LLama-3.1-8b-Ultra-Max-Pro](https://huggingface.co/bunnycore/LLama-3.1-8b-Ultra-Max-Pro)
* [DreadPoor/Heart_Stolen-8B-Model_Stock](https://huggingface.co/DreadPoor/Heart_Stolen-8B-Model_Stock)
* [grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B](https://huggingface.co/grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B)
* [Replete-AI/Replete-LLM-V2-Llama-3.1-8b](https://huggingface.co/Replete-AI/Replete-LLM-V2-Llama-3.1-8b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: bunnycore/LLama-3.1-8b-Ultra-Max-Pro
- model: DreadPoor/Heart_Stolen-8B-Model_Stock
- model: bunnycore/HyperLLama3.1-8b-Nova
- model: Replete-AI/Replete-LLM-V2-Llama-3.1-8b
- model: grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B
merge_method: model_stock
base_model: meta-llama/Meta-Llama-3.1-8B
dtype: bfloat16
```
|
VirgiF/results_pretrain_gemma_more_pause | VirgiF | 2024-09-12T21:59:15Z | 127 | 0 | transformers | ["transformers", "safetensors", "gemma", "text-generation", "trl", "sft", "generated_from_trainer", "base_model:google/gemma-2b", "base_model:finetune:google/gemma-2b", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-09-12T21:55:30Z |
---
library_name: transformers
license: gemma
base_model: google/gemma-2b
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: results_pretrain_gemma_more_pause
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_pretrain_gemma_more_pause
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.1.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Graziela/videomae-base-finetuned-ucf101-subset | Graziela | 2024-09-12T21:54:29Z | 64 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | video-classification | 2024-08-12T13:40:24Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2985
- Accuracy: 0.4143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3254 | 1.0 | 300 | 1.2985 | 0.4143 |
### Framework versions
- Transformers 4.44.2
- Pytorch 1.12.0+cu102
- Datasets 3.0.0
- Tokenizers 0.19.1
|
ShushantLLM/LLama_music_generator | ShushantLLM | 2024-09-12T21:53:43Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "trl", "sft", "missing lyric Llama2 1", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Llama-2-7b-hf", "base_model:finetune:meta-llama/Llama-2-7b-hf", "license:llama2", "endpoints_compatible", "region:us"] | null | 2024-04-15T02:47:21Z |
---
library_name: transformers
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- missing lyric Llama2 1
- generated_from_trainer
- missing lyric Llama2 1
datasets:
- generator
model-index:
- name: LLama_music_generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LLama_music_generator
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.04
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
Siddartha10/outputs_dpo | Siddartha10 | 2024-09-12T21:39:04Z | 125 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:Siddartha10/epoch_1", "base_model:finetune:Siddartha10/epoch_1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-09-12T21:38:44Z |
---
library_name: transformers
license: apache-2.0
base_model: Siddartha10/epoch_1
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: outputs_dpo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs_dpo
This model is a fine-tuned version of [Siddartha10/epoch_1](https://huggingface.co/Siddartha10/epoch_1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Yntec/darelitesFantasyMix | Yntec | 2024-09-12T21:38:57Z | 3,447 | 0 | diffusers | ["diffusers", "safetensors", "Beautiful", "Fantasy", "Semi-Realistic", "DareLite", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2024-09-12T16:55:59Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Beautiful
- Fantasy
- Semi-Realistic
- DareLite
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# DareLight's Fantasy Mix
This model has the kl-f8-anime2 VAE baked in for improved details. Original page: https://civitai.com/models/11482/darelites-fantasy-mix - Comparison:

(Click for larger)
Samples and prompts:

glow, Cartoon Pretty Girl, sitting on a box of bottles, DETAILED ANIME BROWN EYES, holding LEATHER JACKET, gorgeous detailed hair, Ponytail, Magazine ad, iconic, 1940, sharp focus. Illustration By KlaysMoji and artgerm and Clay Mann and and leyendecker and Dave Rapoza

a Cooking of a beautiful young cute girl

digital painting, anime, trending on artstation close up of pretty cute asian girl, centered, (messy bun), brown eyes, pale skin, behind trees, (high detailed skin:1.2), beach, Fujifilm XT3, (high detailed face:1.3)

(digital painting:1.3), cartoon, trending on artstation, close up of pretty cute Swedish girl, centered, (messy bun), beautiful brown eyes, pale skin, behind mountains, snow, (high detailed skin:1.2), film grain, Fujifilm XT3, (high detailed face:1.3)
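To try prompts like the ones above locally, a minimal diffusers sketch follows; the step count and guidance scale are assumptions, not recommendations from the original page.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/darelitesFantasyMix", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a Cooking of a beautiful young cute girl",  # one of the sample prompts above
    num_inference_steps=30,   # assumption; tune to taste
    guidance_scale=7.0,       # assumption; tune to taste
).images[0]
image.save("darelites_fantasy_mix_sample.png")
```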
|
FabioTiroli/LaminiFT3 | FabioTiroli | 2024-09-12T21:10:15Z | 181 | 0 | transformers | ["transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "base_model:lamini/lamini_docs_finetuned", "base_model:finetune:lamini/lamini_docs_finetuned", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-09-12T18:11:06Z |
---
library_name: transformers
license: apache-2.0
base_model: lamini/lamini_docs_finetuned
tags:
- generated_from_trainer
model-index:
- name: LaminiFT3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LaminiFT3
This model is a fine-tuned version of [lamini/lamini_docs_finetuned](https://huggingface.co/lamini/lamini_docs_finetuned) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 3
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cpu
- Datasets 2.21.0
- Tokenizers 0.19.1
|
morturr/Llama-2-7b-hf-dadjokes-2024-09-12 | morturr | 2024-09-12T21:02:41Z | 10 | 0 | peft | ["peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us"] | null | 2024-09-11T21:38:00Z |
---
base_model: meta-llama/Llama-2-7b-hf
library_name: peft
license: llama2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-dadjokes-2024-09-12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-dadjokes-2024-09-12
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 150
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.2.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Igniter909/xlm-roberta-base-finetuned_panx_de | Igniter909 | 2024-09-12T20:56:34Z | 104 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2024-09-12T19:37:11Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned_panx_de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned_panx_de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 0.2539 | 1.0 | 525 | 0.1505 | 0.0 |
| 0.1268 | 2.0 | 1050 | 0.1380 | 0.0 |
| 0.0794 | 3.0 | 1575 | 0.1363 | 0.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
SongTonyLi/gemma-2b-it-SFT-D1_chosen-then-D2_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge | SongTonyLi | 2024-09-12T20:52:08Z | 125 | 0 | transformers | ["transformers", "safetensors", "gemma", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-09-12T20:49:19Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
passthepizza/NarrativAI-Reflection | passthepizza | 2024-09-12T20:46:27Z | 10 | 0 | null | ["safetensors", "llama", "en", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "license:mit", "region:us"] | null | 2024-09-11T19:19:13Z |
---
license: mit
language:
- en
base_model:
- meta-llama/Meta-Llama-3.1-8B
---
<img src="https://i.ibb.co/WWXXQ89/662e2c140acef96920e91b66-trace-1-1.png" alt="Mhm." width="300"/>
# NarrativAI Reflection
**Model Name:** NarrativAI Reflection
**Base Model:** Llama 3.1 8B
<img src="https://media.discordapp.net/attachments/1283889722384580640/1283889723189755965/1qwHDiM.png?ex=66e4a2fb&is=66e3517b&hm=37b0208edb7d652147914a436ab9e5b8c75095fcc65298ac961f3a134db78f73&=&format=webp&quality=lossless&width=969&height=669" alt="Mhm." width="800"/>
**Model Description:** NarrativAI Reflection is a fine-tuned language model based on Llama 3.1 8B designed to generate internal monologue (pre-response thought, post-response reflection) and dialogue for roleplay actions.
**Training Objective:** The model was fine-tuned on a dataset containing prompts describing fictional scenarios with characters and their emotional states, along with corresponding responses that include pre-response thoughts, dialogue/action, and post-response reflections. The objective is to enable the model to generate believable internal monologue and dialogue that reflects the character's personality, emotions, and the context of the scenario.
**Intended Use:** NarrativAI Reflection is intended for creative writing and role-playing applications.
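A minimal loading and generation sketch with the Transformers library follows; the prompt template the model actually expects (how the scenario and character state are laid out) is not documented in this card, so the prompt below is only a placeholder.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "passthepizza/NarrativAI-Reflection"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Placeholder scenario prompt -- the real training prompt format is not documented here.
prompt = (
    "Scenario: A nervous knight stands before the dragon's cave.\n"
    "Character: Sir Aldric (anxious, determined)\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```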
**Limitations:**
* **Limited context:** The model's understanding of the scenario is based solely on the provided prompt. It may struggle with complex scenarios or require additional context to generate accurate responses.
* **Bias and Stereotypes:** The training data may contain biases and stereotypes, which could be reflected in the model's generated text.
* **Lack of real-world knowledge:** The model's knowledge is limited to the information present in the training data. It may not be able to accurately reflect real-world situations or events.
**Disclaimer:**
NarrativAI Reflection is a research project and should not be used in any application where its limitations could cause harm or misrepresent real-world situations. Users are solely responsible for the ethical and responsible use of the model.
**Community Support:** [NarrativAI Discord](https://discord.gg/QDS4Nng6j6)
<img src="https://media1.tenor.com/m/TE8p01zc7zgAAAAC/rich-amiri-rich-amiri-one-call.gif" alt="Mhm." width="800"/>
|
mbrhan/sdxl-vae-fp16-fix | mbrhan | 2024-09-12T20:38:47Z | 14 | 0 | diffusers | ["diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "license:mit", "region:us"] | null | 2024-09-12T20:25:30Z |
---
license: mit
tags:
- stable-diffusion
- stable-diffusion-diffusers
inference: false
---
|
jbjeong91/llama3.1-cpo-full-0912 | jbjeong91 | 2024-09-12T20:35:54Z | 7 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "cpo", "generated_from_trainer", "conversational", "dataset:princeton-nlp/llama3-ultrafeedback", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-09-12T17:38:33Z |
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- alignment-handbook
- trl
- cpo
- generated_from_trainer
- trl
- cpo
- generated_from_trainer
datasets:
- princeton-nlp/llama3-ultrafeedback
model-index:
- name: llama3.1-cpo-full-0912
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3.1-cpo-full-0912
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the princeton-nlp/llama3-ultrafeedback dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5985
- Rewards/chosen: -15.4365
- Rewards/rejected: -16.1367
- Rewards/accuracies: 0.6239
- Rewards/margins: 0.7002
- Logps/rejected: -161.3668
- Logps/chosen: -154.3647
- Logits/rejected: -0.3853
- Logits/chosen: -0.4112
- Nll Loss: 0.4210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|
| 1.9362 | 0.2311 | 100 | 1.7930 | -14.9339 | -15.2848 | 0.5761 | 0.3508 | -152.8475 | -149.3394 | -0.4123 | -0.4378 | 0.4067 |
| 1.7019 | 0.4623 | 200 | 1.6786 | -15.4303 | -16.0131 | 0.6087 | 0.5828 | -160.1311 | -154.3027 | -0.3358 | -0.3580 | 0.4193 |
| 1.6388 | 0.6934 | 300 | 1.6233 | -15.5465 | -16.2127 | 0.6130 | 0.6662 | -162.1269 | -155.4650 | -0.3582 | -0.3828 | 0.4230 |
| 1.632 | 0.9246 | 400 | 1.6007 | -15.6505 | -16.3448 | 0.6370 | 0.6943 | -163.4479 | -156.5048 | -0.3811 | -0.4072 | 0.4277 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.19.1
|
SongTonyLi/gemma-2b-it-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge
|
SongTonyLi
| 2024-09-12T20:24:32Z | 154 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-12T20:21:46Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
leap-llm/Meta-Llama-3-8B-Instruct-kto-alfworld-lr5e-7-bt0.01-ep1-iter1
|
leap-llm
| 2024-09-12T20:13:15Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-12T20:08:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jott1970/code-llama-3-1-8b-text-to-sql-Q6_K-GGUF
|
jott1970
| 2024-09-12T20:07:51Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:philschmid/code-llama-3-1-8b-text-to-sql",
"base_model:quantized:philschmid/code-llama-3-1-8b-text-to-sql",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-12T20:07:21Z |
---
base_model: philschmid/code-llama-3-1-8b-text-to-sql
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# jott1970/code-llama-3-1-8b-text-to-sql-Q6_K-GGUF
This model was converted to GGUF format from [`philschmid/code-llama-3-1-8b-text-to-sql`](https://huggingface.co/philschmid/code-llama-3-1-8b-text-to-sql) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/philschmid/code-llama-3-1-8b-text-to-sql) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jott1970/code-llama-3-1-8b-text-to-sql-Q6_K-GGUF --hf-file code-llama-3-1-8b-text-to-sql-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jott1970/code-llama-3-1-8b-text-to-sql-Q6_K-GGUF --hf-file code-llama-3-1-8b-text-to-sql-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jott1970/code-llama-3-1-8b-text-to-sql-Q6_K-GGUF --hf-file code-llama-3-1-8b-text-to-sql-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jott1970/code-llama-3-1-8b-text-to-sql-Q6_K-GGUF --hf-file code-llama-3-1-8b-text-to-sql-q6_k.gguf -c 2048
```
|
doaonduty/llama-3.1-8b-instruct-gguf
|
doaonduty
| 2024-09-12T19:57:16Z | 17 | 0 | null |
[
"gguf",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-08T22:48:12Z |
---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
---
## Model Details
GGUF build of Meta-Llama-3.1-8B-Instruct model
### Model Description
- **Model Creator:** Meta-llama
- **Original Model:** Meta-Llama-3.1-8B-Instruct
|
AnasAber/seamless-darija-eng
|
AnasAber
| 2024-09-12T19:47:40Z | 131 | 0 |
transformers
|
[
"transformers",
"safetensors",
"seamless_m4t_v2",
"text2text-generation",
"darija",
"moroccan_darija",
"translation",
"seamless",
"text-generation-inference",
"Machine translation",
"MA",
"NLP",
"en",
"ar",
"dataset:AnasAber/DoDA_sentences_darija_english",
"dataset:HANTIFARAH/cleaned_subtitles_all_videos2",
"base_model:facebook/seamless-m4t-v2-large",
"base_model:finetune:facebook/seamless-m4t-v2-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-09-12T19:22:54Z |
---
library_name: transformers
tags:
- darija
- moroccan_darija
- translation
- seamless
- text-generation-inference
- Machine translation
- MA
- NLP
datasets:
- AnasAber/DoDA_sentences_darija_english
- HANTIFARAH/cleaned_subtitles_all_videos2
language:
- en
- ar
base_model:
- facebook/seamless-m4t-v2-large
pipeline_tag: text2text-generation
---
# Seamless Enhanced Darija-English Translation Model
## Model Details
- **Model Name**: seamless-darija-eng
- **Base Model**: facebook/seamless-m4t-v2-large
- **Model Type**: Fine-tuned translation model
- **Languages**: Moroccan Arabic (Darija) ↔ English
- **Developer**: Anas ABERCHIH
## Model Description
This model is a fine-tuned version of Facebook's SeamlessM4T-v2 large model, specifically optimized for translation between Moroccan Arabic (Darija) and English.
It leverages the power of the base Seamless model while being tailored to the nuances of Darija, making it particularly effective for Moroccan Arabic to English translations and vice versa.
### Training Data
The model was trained on two datasets.
First, on a dataset of 40,000 sentence pairs:
- Training set: 32,780 pairs
- Validation set: 5,785 pairs
- Test set: 6,806 pairs
Second, on a dataset of 82,332 sentence pairs:
- Training set: 59,484 pairs
- Validation set: 10,498 pairs
- Test set: 12,350 pairs
Each entry in the dataset contains:
- Darija text (Arabic script)
- English translation
### Training Procedure
- **Training Duration**: Approximately 9 hours
- **Number of Epochs**: 5
## Intended Use
This model is intended for direct use in translating text between Moroccan Arabic (Darija) and English.
It can be further fine-tuned and deployed in various applications requiring translation services.
This version is more capable than the base model at Darija-to-English translation.
### Direct Use
This model is designed for:
1. Translating Moroccan Arabic (Darija) text to English
2. Translating English text to Moroccan Arabic (Darija)
It can be particularly useful for:
- Localization of content for Moroccan audiences
- Cross-cultural communication between Darija speakers and English speakers
- Assisting in the understanding of Moroccan social media content, informal writing, or dialect-heavy texts
### Downstream Use
The model can be integrated into various applications, such as:
- Machine translation systems focusing on Moroccan content
- Chatbots or virtual assistants for Moroccan users
- Content analysis tools for Moroccan social media or web content
- Educational tools for language learners (both Darija and English)
## Limitations and Bias
The model's performance may be influenced by biases present in the training data, such as the representation of certain dialectal variations or cultural nuances.
Additionally, the model's accuracy may vary depending on the complexity of the text being translated and the presence of out-of-vocabulary words.
### Out-of-Scope Use
This model should not be used for:
1. Legal or medical translations where certified human translators are required
2. Translating other Arabic dialects or Modern Standard Arabic (MSA) to English (or vice versa)
3. Understanding or generating spoken language directly (it's designed for text)
### Recommendations
- Always review the output for critical applications, especially when dealing with nuanced or context-dependent content
- Be aware that the model may not capture all regional variations within Moroccan Arabic
- For formal or professional content, consider post-editing by a human translator
## How to Get Started
To use this model:
1. Install the Transformers library:
```
pip install transformers
```
2. Load the model and tokenizer:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_name = "AnasAber/seamless-darija-eng"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
3. Translate text:
```python
def translate(text, src_lang, tgt_lang):
    tokenizer.src_lang = src_lang  # select the source language (assumes the Seamless tokenizer exposes this attribute)
    inputs = tokenizer(text, return_tensors="pt")
    translated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id[tgt_lang])
    return tokenizer.batch_decode(translated, skip_special_tokens=True)[0]
# Darija to English
darija_text = "كيفاش نقدر نتعلم الإنجليزية بسرعة؟"
english_translation = translate(darija_text, src_lang="ary", tgt_lang="eng")
print(english_translation)
# English to Darija
english_text = "How can I learn English quickly?"
darija_translation = translate(english_text, src_lang="eng", tgt_lang="ary")
print(darija_translation)
```
Remember to handle exceptions and implement proper error checking in production environments.
## Ethical Considerations
- Respect privacy and data protection laws when using this model with user-generated content
- Be aware of potential biases in the training data that may affect translations
- Use the model responsibly and avoid applications that could lead to discrimination or harm
## Contact Information
For questions, citations, or feedback about this model, please contact Anas ABERCHIH on [LinkedIn](https://www.linkedin.com/in/anas-aberchih-%F0%9F%87%B5%F0%9F%87%B8-b6007121b/) or via the linked GitHub account.
|
llm-wizard/llama38binstruct_summarize
|
llm-wizard
| 2024-09-12T19:45:25Z | 5 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:adapter:NousResearch/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-06-11T22:24:09Z |
---
base_model: NousResearch/Meta-Llama-3-8B-Instruct
datasets:
- generator
library_name: peft
license: other
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama38binstruct_summarize
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama38binstruct_summarize
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1068
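Because this repository contains a PEFT adapter rather than full model weights, a minimal loading sketch (assuming standard PEFT usage on top of the base model) looks like this:
```python
# Minimal sketch; dtype/device settings are illustrative.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Meta-Llama-3-8B-Instruct"
adapter_id = "llm-wizard/llama38binstruct_summarize"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter weights
```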
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 30
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.55 | 1.3158 | 25 | 1.5000 |
| 0.5276 | 2.6316 | 50 | 1.7814 |
| 0.2099 | 3.9474 | 75 | 1.8811 |
| 0.0761 | 5.2632 | 100 | 2.1068 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
aztro/mabama-flux
|
aztro
| 2024-09-12T19:39:51Z | 929 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] |
text-to-image
| 2024-09-12T19:37:40Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: photo of mabama a beautiful woman
parameters:
negative_prompt: Low quality
output:
url: images/_bb99d340-5f2d-45dd-914f-21dadb44c7b1.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mabama
license: mit
---
# mabama-flux
<Gallery />
## Model description
FLUX model
## Trigger words
You should use `mabama` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/aztro/mabama-flux/tree/main) them in the Files & versions tab.
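## Use with diffusers
A minimal loading sketch is shown below; it assumes this LoRA works with diffusers' standard `load_lora_weights` API, and the inference settings are illustrative only.
```python
# Minimal sketch; assumes standard diffusers LoRA loading for FLUX pipelines.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("aztro/mabama-flux")
pipe.to("cuda")

image = pipe("photo of mabama a beautiful woman", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("mabama.png")
```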
|
AmeerH/FPT_FineTune_alpaca_v2
|
AmeerH
| 2024-09-12T19:34:14Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-12T18:04:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fsicoli/whisper-medium-pt-cv16-fleurs2
|
fsicoli
| 2024-09-12T19:28:30Z | 13 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:fsicoli/cv16-fleurs",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-09-09T17:41:15Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- fsicoli/cv16-fleurs
metrics:
- wer
model-index:
- name: whisper-medium-pt-cv16-fleurs2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: fsicoli/cv16-fleurs default
type: fsicoli/cv16-fleurs
args: default
metrics:
- name: Wer
type: wer
value: 0.09492975940578072
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-pt-cv16-fleurs2
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the fsicoli/cv16-fleurs default dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1428
- Wer: 0.0949
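A minimal transcription sketch with the 🤗 `pipeline` API is shown below; the audio file path is a placeholder.
```python
# Minimal sketch; the audio file path is a placeholder.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="fsicoli/whisper-medium-pt-cv16-fleurs2")
result = asr("sample_pt.wav", chunk_length_s=30)  # long-form audio is chunked into 30 s windows
print(result["text"])
```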
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 25000
- training_steps: 25000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:-----:|:---------------:|:------:|
| 0.2244 | 2.3343 | 5000 | 0.1728 | 0.1110 |
| 0.1471 | 4.6685 | 10000 | 0.1515 | 0.0996 |
| 0.149 | 7.0028 | 15000 | 0.1428 | 0.0949 |
| 0.0697 | 9.3371 | 20000 | 0.1436 | 0.0940 |
| 0.0374 | 11.6713 | 25000 | 0.1561 | 0.0972 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.1
- Datasets 2.21.0
- Tokenizers 0.19.1
|
ainth89/fake_planet_3
|
ainth89
| 2024-09-12T19:14:06Z | 125 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-12T18:46:33Z |
---
library_name: transformers
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: fake_planet_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fake_planet_3
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
SicariusSicariiStuff/ZeusLabs_Chronos-Divergence-33B_FP8
|
SicariusSicariiStuff
| 2024-09-12T19:07:06Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"roleplay",
"storywriting",
"llama1",
"finetune",
"pytorch",
"conversational",
"base_model:elinas/chronos-33b",
"base_model:finetune:elinas/chronos-33b",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-12T18:50:59Z |
---
license: cc-by-nc-4.0
base_model: elinas/chronos-33b
tags:
- roleplay
- storywriting
- llama1
- finetune
- transformers
- pytorch
---
# Zeus Labs ~ Chronos-Divergence-33B

The original model, LLaMA 1, was pre-trained at a sequence length of 2,048 tokens. We went through two individual training runs targeting a sequence length of 16,384, a significant increase over the original length. While it was originally pre-trained on 1.4T tokens, it responded positively to our 500M-token train and will write coherently and keep the same writing format (granted some caveats) up to 12K tokens relatively consistently.
Chronos-Divergence-33B is a one-of-a-kind model based on the original [Chronos-33B](https://huggingface.co/elinas/chronos-33b), now focused on prompt adherence for *roleplay* and storywriting.
It was trained at 16,384 tokens and can go up to around 12,000 tokens before any deterioration, without the use of RoPE scaling or other context-extension techniques.
**The unique aspect of this model is that it has little to no "GPT-isms" (commonly referred to as "slop"), the repetitive phrases many modern LLMs
output due to their pre-training and finetuning datasets. We completely cleaned our datasets and relied on the original "charm" of the L1 series, and might bring this
to more of the smaller models if this gains traction. It also avoids ["purple prose"](https://en.wikipedia.org/wiki/Purple_prose) in the same way.**
RoPE scaling and RULER have not been tested, as we are satisfied with our results. We will also run evaluations, but are not expecting much from a dated model focused on RP intelligence.
Next steps would be to implement GQA (Grouped Query Attention): as the number of input tokens increases, so does memory usage, and this technique has been shown to reduce
that burden. This will require significant effort on our part (help welcome!), and we hope that quantizations will be sufficient in the meantime.
The datasets used do not have a planned release date; it is less the data and more the technique that makes this "dated" model special and unlike what many of us
have experienced before, thanks to the modernization added to the model without the common phrases GPTs like to output today, though it is uncensored as a result.
Without spoiling anything, the name of the model and presented character have meaning... Look up Steins;Gate if you are not familiar :)
## Instruct Template
This model uses `ChatML` - below is an example. It is a preset in many frontends.
```
<|im_start|>system
A system prompt describing how you'd like your bot to act.<|im_end|>
<|im_start|>user
Hello there!<|im_end|>
<|im_start|>assistant
I can assist you or we can discuss other things?<|im_end|>
<|im_start|>user
I was wondering how transformers work?<|im_end|>
<|im_start|>assistant
```
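Building this prompt programmatically is straightforward; the sketch below assembles the ChatML string by hand, mirroring the template above, and makes no assumption about a bundled chat template.
```python
# Minimal sketch: hand-rolled ChatML prompt assembly.
def chatml_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, content in turns:  # role is "user" or "assistant"
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # left open for the model to complete
    return "\n".join(parts)

prompt = chatml_prompt(
    "A system prompt describing how you'd like your bot to act.",
    [("user", "Hello there!")],
)
```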
## Quantization
#### LlamaCPP
[@bartowski](https://huggingface.co/bartowski/Chronos-Divergence-33B-GGUF)
[@mradermacher](https://huggingface.co/mradermacher/Chronos-Divergence-33B-i1-GGUF)
#### Exllama2
[@elinas - 8.0bpw](https://huggingface.co/ZeusLabs/Chronos-Divergence-33B-exl2-8.0bpw)
[@SicariusSicariiStuff - 6.0bpw](https://huggingface.co/SicariusSicariiStuff/ZeusLabs_Chronos-Divergence-33B-EXL2-6.0bpw)
[@SicariusSicariiStuff - 4.0bpw](https://huggingface.co/SicariusSicariiStuff/ZeusLabs_Chronos-Divergence-33B-EXL2-4.0bpw)
[More quants available here](https://huggingface.co/collections/SicariusSicariiStuff/zeuslabs-chronos-divergence-33b-exl2-quants-66e218145b1fc436d9e56d6f)
## Sampling Settings
Here are some settings that work well with this model:
```
Temp -> 0.7 (1.0 max)
Min P -> 0.05-0.10
Presence Penalty -> 1.0
Repetition Penalty range -> 2800
```
## Credit
Thank you to my team consisting of [@Fizzarolli](https://huggingface.co/Fizzarolli) and [@ToastyPigeon](https://huggingface.co/ToastyPigeon) and myself [@elinas](https://huggingface.co/elinas).
Fizz graciously provided compute for us to run this (dumb but fun) experiment, while Toasty assisted in dataset preparation! I ran the MLOps in the meantime.
## Additional Details
Please be mindful of the license. This is strictly non-commercial (by Meta LLaMA terms as well), but free to use at your own leisure personally.
If you have any questions or concerns, please post in the community tab.
DISCLAIMER: Outputs generated by the model are not reflective of our views.
|
ostapbodnar/Phi3.5-mini-ua-artificial
|
ostapbodnar
| 2024-09-12T19:07:05Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-12T19:02:43Z |
---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** ostapbodnar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Abdelwahab201/distilbert-base-uncased-finetuned-emotion
|
Abdelwahab201
| 2024-09-12T19:02:23Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-12T18:50:41Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2108
- Accuracy: 0.9255
- F1: 0.9255
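A minimal classification sketch with the 🤗 `pipeline` API is shown below; the exact label set depends on the (unspecified) emotion dataset used for fine-tuning.
```python
# Minimal sketch; labels come from the fine-tuning dataset.
from transformers import pipeline

classifier = pipeline("text-classification", model="Abdelwahab201/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't believe how well this turned out!"))
```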
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7515 | 1.0 | 250 | 0.3076 | 0.906 | 0.9059 |
| 0.2391 | 2.0 | 500 | 0.2108 | 0.9255 | 0.9255 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
QuantFactory/Maxtopia-13B-GGUF
|
QuantFactory
| 2024-09-12T19:02:17Z | 21 | 2 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"base_model:Gryphe/MythoMax-L2-13b",
"base_model:merge:Gryphe/MythoMax-L2-13b",
"base_model:Undi95/Utopia-13B",
"base_model:merge:Undi95/Utopia-13B",
"endpoints_compatible",
"region:us"
] | null | 2024-09-12T16:24:35Z |
---
base_model:
- Gryphe/MythoMax-L2-13b
- Undi95/Utopia-13B
library_name: transformers
tags:
- mergekit
- merge
---
[](https://hf.co/QuantFactory)
# QuantFactory/Maxtopia-13B-GGUF
This is quantized version of [ClaudioItaly/Maxtopia-13B](https://huggingface.co/ClaudioItaly/Maxtopia-13B) created using llama.cpp
# Original Model Card
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Gryphe/MythoMax-L2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b)
* [Undi95/Utopia-13B](https://huggingface.co/Undi95/Utopia-13B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Undi95/Utopia-13B
- model: Gryphe/MythoMax-L2-13b
merge_method: slerp
base_model: Undi95/Utopia-13B
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: Utopia for input & output, MythoMax in the middle layers
```
|
airev-ai/Amal-70b-v4.1
|
airev-ai
| 2024-09-12T18:59:55Z | 2,747 | 0 | null |
[
"safetensors",
"qwen2",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | 2024-09-12T17:00:29Z |
---
license: apache-2.0
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
oyemade/speecht5_finetuned_yoruba
|
oyemade
| 2024-09-12T18:57:11Z | 86 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-09-12T17:25:17Z |
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_yoruba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_yoruba
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4657
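A minimal synthesis sketch is shown below; it assumes the standard SpeechT5 text-to-speech API, and the zero speaker embedding and example sentence are placeholders (real x-vector speaker embeddings give better voices).
```python
# Minimal sketch; speaker embedding and input text are placeholders.
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "oyemade/speecht5_finetuned_yoruba"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Bawo ni?", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```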
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6226 | 0.6126 | 100 | 0.5479 |
| 0.5421 | 1.2251 | 200 | 0.5122 |
| 0.5263 | 1.8377 | 300 | 0.4819 |
| 0.5075 | 2.4502 | 400 | 0.4762 |
| 0.4958 | 3.0628 | 500 | 0.4657 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
mrTvister/HSP
|
mrTvister
| 2024-09-12T18:54:48Z | 100 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2024-09-12T18:54:40Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
red and blue vector style poster with a border, fox with cupcake, bottom of
poster has bold blue text "Anna" <lora:Hope_Style_Poster__Flux:1>
output:
url: images/00038-2175447700.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: red and blue vector style poster, bottom of poster has bold blue text "Text"
---
# Hope style poster
<Gallery />
## Trigger words
You should use `red and blue vector style poster` to trigger the image generation.
You should use `bottom of poster has bold blue text "Text"` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/mrTvister/HSP/tree/main) them in the Files & versions tab.
|
shuzyuan/t5_large_sage_snli
|
shuzyuan
| 2024-09-12T18:53:25Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2024-09-12T18:51:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ianssens/e5-model-rag-v2
|
ianssens
| 2024-09-12T18:50:22Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-09-12T18:39:32Z |
---
library_name: transformers
---
# Model Card
🏆 Fine-Tuned Model from AI Talent Hub Hackathon
This model was fine-tuned during the AI Talent Hub Hackathon on a custom-generated dataset to improve its performance on semantic-search tasks.
This is the second version of the model.
## Model Details
### Model Description
**Train Dataset**
34k rows with fields ['queries', 'corpus', 'relevant_docs', 'mode']
**Split Info**
- chunk_size=512
- chunk_overlap=20
- **Developed by:** ianssens
- **Model type:** text embedding
- **Language(s) (NLP):** Russian
- **License:** MIT
- **Finetuned from model [optional]:** e5-large
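A minimal semantic-search sketch is shown below; it assumes this fine-tune inherits the usual E5 conventions ("query: "/"passage: " prefixes, mean pooling over non-padding tokens).
```python
# Minimal sketch; E5-style prefixes and mean pooling are assumed to apply.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "ianssens/e5-model-rag-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

texts = ["query: how do I request vacation days?",
         "passage: Vacation is requested by submitting a form to your manager."]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    out = model(**batch)

mask = batch["attention_mask"].unsqueeze(-1)               # ignore padding tokens
emb = (out.last_hidden_state * mask).sum(1) / mask.sum(1)  # mean pooling
emb = F.normalize(emb, dim=-1)
print((emb[0] @ emb[1]).item())                            # cosine similarity query vs. passage
```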
|
knowledgator/Qwen2-0.5Bchp-15k
|
knowledgator
| 2024-09-12T18:37:50Z | 127 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-12T18:37:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sayed99/xlm-roberta-base-finetuned-panx-de
|
sayed99
| 2024-09-12T18:37:36Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-12T17:39:54Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.8658
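A minimal usage sketch with the 🤗 `pipeline` API (illustrative; the example sentence and the expected entity labels assume the standard PAN-X/WikiANN tag set, since the card does not document them):
```python
# Minimal inference sketch -- the sentence and the entity labels (PER/ORG/LOC)
# are assumptions based on the PAN-X German task, not outputs verified for this checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="sayed99/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Angela Merkel besuchte das Siemens-Werk in München."))
```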
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they map onto `TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
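As a point of reference, a hedged reconstruction of the hyperparameters above as 🤗 `TrainingArguments` (illustrative only; the actual training script is not published with this card, and `output_dir` and the evaluation strategy are assumptions):
```python
# Hedged reconstruction of the listed hyperparameters; everything else is default.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-de",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="epoch",  # assumption: one evaluation per epoch, matching the results table
)
```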
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2539 | 1.0 | 525 | 0.1505 | 0.8246 |
| 0.1268 | 2.0 | 1050 | 0.1380 | 0.8503 |
| 0.0794 | 3.0 | 1575 | 0.1363 | 0.8658 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Habiba-Eid/xlm-roberta-base-finetuned-panx-all
|
Habiba-Eid
| 2024-09-12T18:34:33Z | 115 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-12T18:18:27Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1758
- F1: 0.8558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.299 | 1.0 | 835 | 0.2074 | 0.8078 |
| 0.1587 | 2.0 | 1670 | 0.1705 | 0.8461 |
| 0.1012 | 3.0 | 2505 | 0.1758 | 0.8558 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF
|
mradermacher
| 2024-09-12T18:21:20Z | 29 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:EpistemeAI2/Firebal-Llama-3.1-8B-R1",
"base_model:quantized:EpistemeAI2/Firebal-Llama-3.1-8B-R1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-09-12T14:46:13Z |
---
base_model: EpistemeAI2/Firebal-Llama-3.1-8B-R1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/EpistemeAI2/Firebal-Llama-3.1-8B-R1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
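As one concrete (hedged) option, the files below can also be loaded with `llama-cpp-python`; the filename here is just one pick from the table below, and memory requirements depend on the chosen quant:
```python
# Minimal llama-cpp-python sketch (one of several ways to run GGUF files).
# Assumes `pip install llama-cpp-python huggingface_hub` and enough RAM for the quant.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF",
    filename="Firebal-Llama-3.1-8B-R1.i1-Q4_K_M.gguf",  # any quant from the table below works
    n_ctx=4096,
)
print(llm("The meaning to life and the universe is", max_tokens=64)["choices"][0]["text"])
```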
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
aakarsh-nair/Baby-Llama-95M-Seq-4
|
aakarsh-nair
| 2024-09-12T18:19:32Z | 125 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-12T18:19:05Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: Baby-Llama-95M-Seq-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baby-Llama-95M-Seq-4
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9045
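A minimal generation sketch (assumptions: the tokenizer ships with the checkpoint and the standard 🤗 `text-generation` pipeline applies; the sampling settings are arbitrary examples, not values from the training run):
```python
# Generation sketch; prompt and sampling settings are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="aakarsh-nair/Baby-Llama-95M-Seq-4")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.95)[0]["generated_text"])
```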
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 8.6297 | 1.0 | 1033 | 8.4760 |
| 3.3509 | 2.0 | 2066 | 3.8582 |
| 2.5888 | 3.0 | 3099 | 2.9881 |
| 2.1873 | 4.0 | 4132 | 2.4774 |
| 1.9476 | 5.0 | 5165 | 2.2827 |
| 1.8391 | 6.0 | 6198 | 2.1369 |
| 1.7294 | 7.0 | 7231 | 2.0072 |
| 1.6532 | 8.0 | 8264 | 1.9444 |
| 1.6399 | 9.0 | 9297 | 1.9106 |
| 1.6131 | 10.0 | 10330 | 1.9045 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Tokenizers 0.19.1
|
Habiba-Eid/xlm-roberta-base-finetuned-panx-en
|
Habiba-Eid
| 2024-09-12T18:18:24Z | 125 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-12T18:13:44Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3905
- F1: 0.6861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0479 | 1.0 | 50 | 0.4854 | 0.5857 |
| 0.4604 | 2.0 | 100 | 0.3995 | 0.6605 |
| 0.3797 | 3.0 | 150 | 0.3905 | 0.6861 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Habiba-Eid/xlm-roberta-base-finetuned-panx-it
|
Habiba-Eid
| 2024-09-12T18:13:40Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-12T18:10:00Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2619
- F1: 0.8321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7217 | 1.0 | 70 | 0.3193 | 0.7343 |
| 0.2736 | 2.0 | 140 | 0.2760 | 0.8055 |
| 0.1838 | 3.0 | 210 | 0.2619 | 0.8321 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
LORDPRITHWISH/DARK_MYSTERY
|
LORDPRITHWISH
| 2024-09-12T18:09:07Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-09-12T17:05:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aradwan1/xlm-roberta-base-finetuned-panx-en
|
aradwan1
| 2024-09-12T18:04:38Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-12T18:02:33Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3905
- F1: 0.6861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0479 | 1.0 | 50 | 0.4854 | 0.5857 |
| 0.4604 | 2.0 | 100 | 0.3995 | 0.6605 |
| 0.3797 | 3.0 | 150 | 0.3905 | 0.6861 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Habiba-Eid/xlm-roberta-base-finetuned-panx-de-fr
|
Habiba-Eid
| 2024-09-12T18:02:49Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-12T17:47:02Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1639
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2836 | 1.0 | 715 | 0.1859 | 0.8212 |
| 0.1484 | 2.0 | 1430 | 0.1632 | 0.8487 |
| 0.0953 | 3.0 | 2145 | 0.1639 | 0.8591 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
aradwan1/xlm-roberta-base-finetuned-panx-it
|
aradwan1
| 2024-09-12T18:02:28Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-12T18:00:08Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2619
- F1: 0.8321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7217 | 1.0 | 70 | 0.3193 | 0.7343 |
| 0.2736 | 2.0 | 140 | 0.2760 | 0.8055 |
| 0.1838 | 3.0 | 210 | 0.2619 | 0.8321 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
aradwan1/xlm-roberta-base-finetuned-panx-de-fr
|
aradwan1
| 2024-09-12T17:54:44Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-12T17:42:35Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1639
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2836 | 1.0 | 715 | 0.1859 | 0.8212 |
| 0.1484 | 2.0 | 1430 | 0.1632 | 0.8487 |
| 0.0953 | 3.0 | 2145 | 0.1639 | 0.8591 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
seddiktrk/pegasus-samsum
|
seddiktrk
| 2024-09-12T17:51:40Z | 113 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pegasus",
"text2text-generation",
"summarization",
"en",
"dataset:Samsung/samsum",
"base_model:google/pegasus-cnn_dailymail",
"base_model:finetune:google/pegasus-cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2024-09-12T12:23:57Z |
---
base_model: google/pegasus-cnn_dailymail
model-index:
- name: pegasus-samsum
results: []
datasets:
- Samsung/samsum
language:
- en
metrics:
- rouge
pipeline_tag: summarization
library_name: transformers
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the
[SAMSum](https://huggingface.co/datasets/Samsung/samsum) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3839
## Intended uses & limitations
### Intended uses
* Dialogue summarization (e.g., chat logs, meetings)
* Text summarization for conversational datasets
### Limitations
* May struggle with very long conversations or non-dialogue text.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6026 | 0.5431 | 500 | 1.4875 |
| 1.4737 | 1.0861 | 1000 | 1.4040 |
| 1.4735 | 1.6292 | 1500 | 1.3839 |
### Test results
| rouge1 | rouge2 | rougeL | rougeLsum |
|:-------------:|:------:|:----:|:---------------:|
| 0.427614 | 0.200571 | 0.340648 | 0.340738 |
## How to use
You can use this model with the transformers library for dialogue summarization. Here's an example in Python:
```python
from transformers import pipeline
import torch
device = 0 if torch.cuda.is_available() else -1
pipe = pipeline("summarization",
model="seddiktrk/pegasus-samsum",
device=device)
custom_dialogue = """\
Seddik: Hey, have you tried using PEGASUS for summarization?
John: Yeah, I just started experimenting with it last week!
Seddik: It's pretty powerful, especially for abstractive summaries.
John: I agree! The results are really impressive.
Seddik: I was thinking of using it for my next project. Want to collaborate?
John: Absolutely! We could make some awesome improvements together.
Seddik: Perfect, let's brainstorm ideas this weekend.
John: Sounds like a plan!
"""
# Summarize dialogue
gen_kwargs = {"length_penalty": 0.8, "num_beams":8, "max_length": 128}
print(pipe(custom_dialogue, **gen_kwargs)[0]["summary_text"])
```
Example Output
```
John started using PEG for summarization last week. Seddik is thinking of using it for his next project.
John and Seddik will brainstorm ideas this weekend.
```
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Nabokov/gemma-2-Ifable-9B-Q5_K_S-GGUF
|
Nabokov
| 2024-09-12T17:49:28Z | 15 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:ifable/gemma-2-Ifable-9B",
"base_model:quantized:ifable/gemma-2-Ifable-9B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-09-12T17:45:39Z |
---
base_model: ifable/gemma-2-Ifable-9B
tags:
- llama-cpp
- gguf-my-repo
---
# Nabokov/gemma-2-Ifable-9B-Q5_K_S-GGUF
This model was converted to GGUF format from [`ifable/gemma-2-Ifable-9B`](https://huggingface.co/ifable/gemma-2-Ifable-9B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ifable/gemma-2-Ifable-9B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Nabokov/gemma-2-Ifable-9B-Q5_K_S-GGUF --hf-file gemma-2-ifable-9b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Nabokov/gemma-2-Ifable-9B-Q5_K_S-GGUF --hf-file gemma-2-ifable-9b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Nabokov/gemma-2-Ifable-9B-Q5_K_S-GGUF --hf-file gemma-2-ifable-9b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Nabokov/gemma-2-Ifable-9B-Q5_K_S-GGUF --hf-file gemma-2-ifable-9b-q5_k_s.gguf -c 2048
```
|
Huertas97/smollm-gec-sftt-kto
|
Huertas97
| 2024-09-12T17:46:21Z | 124 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"kto",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-12T17:44:53Z |
---
library_name: transformers
tags:
- trl
- kto
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gmashaly/xlm-roberta-base-finetuned-panx-de
|
gmashaly
| 2024-09-12T17:34:25Z | 103 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-06T23:15:33Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.8658
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2539 | 1.0 | 525 | 0.1505 | 0.8246 |
| 0.1268 | 2.0 | 1050 | 0.1380 | 0.8503 |
| 0.0794 | 3.0 | 1575 | 0.1363 | 0.8658 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
SongTonyLi/gemma-2b-it-SFT-D_chosen-HuggingFaceH4-ultrafeedback_binarized-large
|
SongTonyLi
| 2024-09-12T17:30:52Z | 122 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-12T17:28:08Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Habiba-Eid/xlm-roberta-base-finetuned-panx-de
|
Habiba-Eid
| 2024-09-12T17:27:23Z | 135 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-12T17:17:00Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.8658
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2539 | 1.0 | 525 | 0.1505 | 0.8246 |
| 0.1268 | 2.0 | 1050 | 0.1380 | 0.8503 |
| 0.0794 | 3.0 | 1575 | 0.1363 | 0.8658 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
mradermacher/TherapyBeagle-11B-v1-GGUF
|
mradermacher
| 2024-09-12T17:11:59Z | 81 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:jerryjalapeno/nart-100k-synthetic",
"base_model:victunes/TherapyBeagle-11B-v1",
"base_model:quantized:victunes/TherapyBeagle-11B-v1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-12T14:33:57Z |
---
base_model: victunes/TherapyBeagle-11B-v1
datasets:
- jerryjalapeno/nart-100k-synthetic
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/victunes/TherapyBeagle-11B-v1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/TherapyBeagle-11B-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TherapyBeagle-11B-v1-GGUF/resolve/main/TherapyBeagle-11B-v1.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/TherapyBeagle-11B-v1-GGUF/resolve/main/TherapyBeagle-11B-v1.IQ3_XS.gguf) | IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/TherapyBeagle-11B-v1-GGUF/resolve/main/TherapyBeagle-11B-v1.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/TherapyBeagle-11B-v1-GGUF/resolve/main/TherapyBeagle-11B-v1.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/TherapyBeagle-11B-v1-GGUF/resolve/main/TherapyBeagle-11B-v1.IQ3_M.gguf) | IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/TherapyBeagle-11B-v1-GGUF/resolve/main/TherapyBeagle-11B-v1.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TherapyBeagle-11B-v1-GGUF/resolve/main/TherapyBeagle-11B-v1.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/TherapyBeagle-11B-v1-GGUF/resolve/main/TherapyBeagle-11B-v1.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/TherapyBeagle-11B-v1-GGUF/resolve/main/TherapyBeagle-11B-v1.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TherapyBeagle-11B-v1-GGUF/resolve/main/TherapyBeagle-11B-v1.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TherapyBeagle-11B-v1-GGUF/resolve/main/TherapyBeagle-11B-v1.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/TherapyBeagle-11B-v1-GGUF/resolve/main/TherapyBeagle-11B-v1.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/TherapyBeagle-11B-v1-GGUF/resolve/main/TherapyBeagle-11B-v1.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TherapyBeagle-11B-v1-GGUF/resolve/main/TherapyBeagle-11B-v1.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TherapyBeagle-11B-v1-GGUF/resolve/main/TherapyBeagle-11B-v1.f16.gguf) | f16 | 21.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
inflatebot/G2-9B-Blackout-R1
|
inflatebot
| 2024-09-12T17:07:55Z | 8 | 8 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"arxiv:2212.04089",
"base_model:IntervitensInc/gemma-2-9b-chatml",
"base_model:merge:IntervitensInc/gemma-2-9b-chatml",
"base_model:anthracite-org/magnum-v3-9b-chatml",
"base_model:merge:anthracite-org/magnum-v3-9b-chatml",
"base_model:crestf411/gemma2-9B-sunfall-v0.5.2",
"base_model:merge:crestf411/gemma2-9B-sunfall-v0.5.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-12T00:05:24Z |
---
base_model:
- crestf411/gemma2-9B-sunfall-v0.5.2
- IntervitensInc/gemma-2-9b-chatml
- anthracite-org/magnum-v3-9b-chatml
library_name: transformers
tags:
- mergekit
- merge
---

`A lot of punch in a little package.`
[GGUFs available courtesy of mradermacher](https://huggingface.co/mradermacher/G2-9B-Blackout-R1-GGUF)
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
A simple task-arithmetic merge of Magnum-v3-9B with just a pinch of Sunfall, to loosen it up a little bit. Does the horny real good, but also has a depth of character that Magnum lacked.
**Uses ChatML formatting,** which in and of itself is a massive upgrade to Gemma2. (Who ships a model without a system prompt in 2024? Come on, Google.)
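As a quick illustration of what that looks like in practice, the sketch below assumes this repo's tokenizer ships the same ChatML chat template as its `IntervitensInc/gemma-2-9b-chatml` base (an assumption, since the card does not state it explicitly):
```python
# Sketch: build a ChatML prompt with the tokenizer's chat template.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("inflatebot/G2-9B-Blackout-R1")
messages = [
    {"role": "system", "content": "You are a thoughtful roleplay partner."},
    {"role": "user", "content": "Set the scene for a rainy-night stakeout."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # expected to render <|im_start|>role ... <|im_end|> turns
```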
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [IntervitensInc/gemma-2-9b-chatml](https://huggingface.co/IntervitensInc/gemma-2-9b-chatml) as a base.
### Models Merged
The following models were included in the merge:
* [crestf411/gemma2-9B-sunfall-v0.5.2](https://huggingface.co/crestf411/gemma2-9B-sunfall-v0.5.2)
* [anthracite-org/magnum-v3-9b-chatml](https://huggingface.co/anthracite-org/magnum-v3-9b-chatml)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: anthracite-org/magnum-v3-9b-chatml
parameters:
weight: 1
- model: crestf411/gemma2-9B-sunfall-v0.5.2
parameters:
weight: 0.3
merge_method: task_arithmetic
base_model: IntervitensInc/gemma-2-9b-chatml
dtype: float32
tokenizer_source: base
parameters:
normalize: true
```
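For intuition, task arithmetic amounts to adding weighted parameter deltas onto the base model. The toy sketch below illustrates the idea on a single tensor with the weights from the YAML above; it is not mergekit's actual implementation, which applies the same operation to every tensor in the checkpoints.
```python
# Toy illustration of task arithmetic on one parameter tensor.
import torch

base    = torch.randn(4, 4)                 # stands in for a base-model weight
magnum  = base + 0.10 * torch.randn(4, 4)   # fine-tunes drift away from the base
sunfall = base + 0.05 * torch.randn(4, 4)

# merged = base + sum_i w_i * (model_i - base), using the weights 1.0 and 0.3 above
merged = base + 1.0 * (magnum - base) + 0.3 * (sunfall - base)
```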
.300 AAC Blackout is an intermediate cartridge designed for use in the M4 Carbine, packing a significantly larger projectile into a cartridge compatible with 5.56mm NATO magazines, requiring only a barrel change.
|
TroyDoesAI/AgentRAG-3B
|
TroyDoesAI
| 2024-09-12T17:07:31Z | 7 | 0 | null |
[
"safetensors",
"llama",
"license:artistic-2.0",
"region:us"
] | null | 2024-09-09T23:50:01Z |
---
license: artistic-2.0
---
```json
{
"prompt,chosen,rejected": "%prompt%\n<|RAG|>\n%chosen%"
}
```
|
Elvijs/segformer-b0-scene-parse-150
|
Elvijs
| 2024-09-12T17:02:03Z | 5 | 0 | null |
[
"safetensors",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"region:us"
] | null | 2024-09-12T16:09:43Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2415
- Mean Iou: 0.0044
- Mean Accuracy: 0.0212
- Overall Accuracy: 0.0625
- Per Category Iou: [0.06352239323592722, 0.0197948041351654, 0.045329518997010716, 0.0, 0.0, 0.04870301462914723, 0.02500211303776811, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
- Per Category Accuracy: [0.09103573519396886, 0.02510679055839352, 0.1716735511328076, 0.0, 0.0, 0.6087197069400552, 0.07979126248845822, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
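For completeness, here is a minimal inference sketch (not part of the original card) for using this checkpoint; it assumes the image processor config was pushed alongside the weights, otherwise the processor can be loaded from `nvidia/mit-b0` instead.
```python
# Sketch: run semantic segmentation with the fine-tuned SegFormer checkpoint.
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

repo = "Elvijs/segformer-b0-scene-parse-150"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("scene.jpg")            # any RGB scene image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]             # per-pixel class ids
```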
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch mapping them follows the list):
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
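As referenced above, here is a rough sketch of how these values might map onto `transformers.TrainingArguments`; it is an illustration, not the script that produced this checkpoint.
```python
# Sketch: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="segformer-b0-scene-parse-150",
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```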
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 4.9447 | 1.0 | 20 | 5.0273 | 0.0011 | 0.0073 | 0.0096 | [0.0, 0.0, 0.002276944365820675, 0.015115643138154297, 0.02816672568583748, 0.020444081187849066, 0.002690279189045338, 0.0, 0.0, 0.006673206651406702, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.010983055131582785, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.007840077286069929, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.016944924540740872, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0017587571449509014, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan] | [0.0, 0.0, 0.002517350245476419, 0.016500023109276063, 0.08131381769636908, 0.031451971196968805, 0.0028841465312431916, nan, 0.0, 0.019423293852227862, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.14621756293276386, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.010543673795722566, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.02100984215002927, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.00262582056892779, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.4665 | 2.0 | 40 | 4.8604 | 0.0021 | 0.0118 | 0.0289 | [0.0, 0.0, 0.014737193929340696, 0.06988486465315293, 0.03046756358059445, 0.02540572690967797, 0.00010281296265833196, 0.0, 0.0, 0.01366279675677929, 0.0, 0.0, 0.0, 0.0, 0.0, 0.008863163606764946, 0.0, 0.0, 0.0, 0.007474172644300944, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0001948811225152657, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan] | [0.0, 0.0, 0.033114977538559964, 0.14569243094178003, 0.21933443178948128, 0.037596506081664476, 0.00010374627810227308, nan, 0.0, 0.04621404399323181, nan, 0.0, 0.0, 0.0, 0.0, 0.015782269103792577, 0.0, 0.0, 0.0, 0.0425807286611014, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0002188183807439825, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.6055 | 3.0 | 60 | 4.6528 | 0.0025 | 0.0170 | 0.0427 | [0.0008755445459981208, 0.0028131207387126216, 0.03896883034287781, 0.07237229483867014, 0.028296129514019188, 0.022904780368913753, 0.0012455791033648819, 0.0, 0.0, 0.009228030728873899, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0008380022027486472, 0.0, 0.0, 0.008366811066707225, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan] | [0.0008817309645276527, 0.0028844460102166636, 0.21049081375780587, 0.150067787209786, 0.2693422492693663, 0.04159506425356056, 0.0014213240100011413, nan, 0.0, 0.027777777777777776, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0008875589592737232, 0.0, 0.0, 0.07742833106840095, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.5631 | 4.0 | 80 | 4.4866 | 0.0037 | 0.0193 | 0.0498 | [0.0, 0.004342766463898439, 0.043572963505144335, 0.06631372332755307, 0.02845107633074987, 0.039998389499536983, 0.0012175476716742217, nan, 0.0, 0.011519430986496238, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.004804138950480414, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0, 0.004519332393870002, 0.24859181374389785, 0.12504814432513212, 0.2730224782613925, 0.20819327202769652, 0.00134870161532955, nan, 0.0, 0.01575719120135364, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01303953924804245, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.0203 | 5.0 | 100 | 4.4061 | 0.0047 | 0.0204 | 0.0580 | [0.0, 0.004366973835165242, 0.04863978493326222, 0.08693516775748797, 0.026976433771613906, 0.03501771016487017, 0.018841082335954475, nan, 0.0, 0.018619856442328192, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0014569380801315944, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0, 0.004546855733662146, 0.28032294404806607, 0.21543237455514644, 0.23917880380530868, 0.11519535932535857, 0.05523451846165019, nan, 0.0, 0.027707275803722505, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0020060829612373, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.2491 | 6.0 | 120 | 4.3102 | 0.0052 | 0.0212 | 0.0597 | [0.0, 0.010156985242586252, 0.04821919554131756, 0.08728253606869757, 0.025662295743393182, 0.04330914802902237, 0.02511118652007964, nan, 0.0, 0.014082940286553351, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0, 0.011168971287651928, 0.25256950529199873, 0.203766041689134, 0.19123961177192203, 0.20870461804129328, 0.08675263774912075, nan, 0.0, 0.022313874788494076, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.9411 | 7.0 | 140 | 4.2501 | 0.0046 | 0.0209 | 0.0538 | [0.0006650489371135445, 0.021160412639338044, 0.04352778140764725, 0.058604298518338, 0.021560223265454898, 0.049498422764399294, 0.027036493428514243, nan, 0.0, 0.004585721730067743, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0006666746317160301, 0.02578936938523868, 0.18762604136242889, 0.09169375587360766, 0.11809205383236918, 0.43236401129990865, 0.1009036300822708, nan, 0.0, 0.006204173716864072, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.6726 | 8.0 | 160 | 4.2067 | 0.0032 | 0.0212 | 0.0549 | [0.0011396255924622587, 0.005313266842897916, 0.057677061675362046, 0.000412568045076879, 0.01644008030206892, 0.04191106479199666, 0.027516933622802536, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0011421880787106178, 0.005554209970054606, 0.6541494555013143, 0.00041596696914141337, 0.038017006025472956, 0.18675865306430386, 0.08770710350766166, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.0818 | 9.0 | 180 | 4.1711 | 0.0032 | 0.0219 | 0.0520 | [0.003375523419770787, 0.004138669266803739, 0.05298308836465968, 0.0, 0.004480239508333246, 0.0444491240408815, 0.03994623015170012, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0033978900584236372, 0.004299145675532852, 0.425119261206381, 0.0, 0.005111429155591904, 0.18224874887881098, 0.385988027679507, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.2069 | 10.0 | 200 | 4.1246 | 0.0033 | 0.0211 | 0.0519 | [0.002796000842824979, 0.014403223894749318, 0.051211269712767585, 0.0, 0.010172529688550303, 0.04962548962067011, 0.021981361731066087, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.002822016989450292, 0.016723181257706537, 0.2951697472914146, 0.0, 0.0163806270821557, 0.5627740102101548, 0.07578665615371048, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.7646 | 11.0 | 220 | 4.0875 | 0.0027 | 0.0208 | 0.0525 | [0.0014987011256910676, 0.004268206568644374, 0.05289061881353971, 1.919098484296017e-05, 0.00046573660478015116, 0.04878741279309468, 0.017819837803108977, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0015053943296813581, 0.004491809054077858, 0.4226714510229343, 1.925773005284321e-05, 0.000529183253755397, 0.4811011543007553, 0.04575210864310243, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.811 | 12.0 | 240 | 4.1005 | 0.0029 | 0.0215 | 0.0527 | [0.003542311728126988, 0.0026537997587454767, 0.051803486673507246, 0.0, 0.0035705203194140683, 0.050551495800347825, 0.022851681837409328, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0036057778468082057, 0.0027248106394222303, 0.28507948429089996, 0.0, 0.004221439137912372, 0.6179323179063315, 0.07566216061998776, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.1478 | 13.0 | 260 | 4.2043 | 0.0025 | 0.0216 | 0.0565 | [0.0009899223053767535, 0.00282525101474044, 0.05731283127021918, 0.0, 0.0, 0.04795680311141858, 0.006619970243568097, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0009940381605515, 0.0029119693500088075, 0.5897484040555764, 0.0, 0.0, 0.3934011216081413, 0.008216705225700028, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.7627 | 14.0 | 280 | 4.1817 | 0.0024 | 0.0217 | 0.0585 | [0.0002459660375826553, 0.0025780841275234753, 0.05982262981909267, 0.0, 0.0, 0.04231943409646513, 0.0053726384249686305, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.00024612002532885697, 0.0026312312841289412, 0.7785844424973227, 0.0, 0.0, 0.20997879171451803, 0.006307773708618203, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.0883 | 15.0 | 300 | 4.2249 | 0.0024 | 0.0218 | 0.0587 | [2.3885958877933197e-05, 0.002817655881722734, 0.059495393904988, 0.0, 0.0, 0.04464230414484267, 0.00563993511132778, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [2.389514809018029e-05, 0.002895455346133521, 0.7695998664830809, 0.0, 0.0, 0.22355880060020286, 0.006961375260662523, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.4081 | 16.0 | 320 | 4.5586 | 0.0021 | 0.0215 | 0.0588 | [0.0, 0.0011548413454991148, 0.060751608072929474, 0.0, 0.0, 0.03413381426653564, 0.0004046185729793743, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0, 0.0011669896071868945, 0.8588197660672313, 0.0, 0.0, 0.12724133017025308, 0.00042535974021931964, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.8017 | 17.0 | 340 | 4.2870 | 0.0027 | 0.0219 | 0.0580 | [0.0, 0.0019801130113279637, 0.05843440556089608, 0.0, 0.0, 0.04972859011201945, 0.013719436694127552, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0, 0.002031222476660208, 0.6953658502663385, 0.0, 0.0, 0.28590948337287186, 0.02586394713089668, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.5659 | 18.0 | 360 | 4.3380 | 0.0026 | 0.0219 | 0.0589 | [8.117774583725295e-05, 0.0015760269989728653, 0.05966897925966962, 0.0, 0.0, 0.0453954724893633, 0.011357581536218065, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [8.124350350661298e-05, 0.0015963537079443368, 0.7729656054853201, 0.0, 0.0, 0.2181435624889977, 0.01680689705256824, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.1801 | 19.0 | 380 | 4.5501 | 0.0022 | 0.0215 | 0.0587 | [8.350750613183688e-05, 0.0014868147309028722, 0.060201314963400414, 0.0, 0.0, 0.03590346445869424, 0.004685068669237535, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [8.363301831563101e-05, 0.001502774352651048, 0.8495292137800587, 0.0, 0.0, 0.1300579246058025, 0.005747543806865929, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.578 | 20.0 | 400 | 4.3040 | 0.0028 | 0.0215 | 0.0575 | [0.0, 0.00441232564612736, 0.05900299774725502, 0.0, 0.0, 0.043719274768370085, 0.022398102481616014, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0, 0.004711995772415008, 0.7633343068942018, 0.0, 0.0, 0.1577460538338377, 0.06544315222691385, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.5732 | 21.0 | 420 | 4.2526 | 0.0033 | 0.0218 | 0.0562 | [0.01197880620383712, 0.004653575782729372, 0.055398493540596415, 0.0, 0.0, 0.05264249405452927, 0.025620698438999074, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.012614248676806175, 0.004937687158710587, 0.4806122307061098, 0.0, 0.0, 0.3915233919844417, 0.11165174449366629, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.7481 | 22.0 | 440 | 4.2137 | 0.0035 | 0.0219 | 0.0607 | [0.03470063452352939, 0.0038559799425570067, 0.056552404316332114, 0.0, 0.0, 0.04991736078453132, 0.01398777835527544, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.04017013345440208, 0.004012902941694557, 0.474506613259899, 0.0, 0.0, 0.4600186096418063, 0.028758468289950097, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.4805 | 23.0 | 460 | 4.2459 | 0.0029 | 0.0214 | 0.0540 | [0.019968861968947278, 0.004132758217708178, 0.04579704079974535, 0.0, 0.0, 0.04999965595960013, 0.011986916010697728, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.022373027156835805, 0.004282631671657566, 0.14307172361997747, 0.0, 0.0, 0.7918737897445784, 0.02459824253804895, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.0486 | 24.0 | 480 | 4.1676 | 0.0033 | 0.0215 | 0.0588 | [0.04355649523723838, 0.001765382310545325, 0.048104443624259224, 0.0, 0.0, 0.049962388583815284, 0.010306310117694065, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.055527545131960954, 0.0018000264224062004, 0.1921461453943617, 0.0, 0.0, 0.7199081253719832, 0.0181970971791387, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.0748 | 25.0 | 500 | 4.6574 | 0.0027 | 0.0213 | 0.0560 | [0.03586399807836847, 0.0014191461559882612, 0.03690721280366637, 0.0, 0.0, 0.04891988177711094, 0.002104860313815538, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.04423947717415979, 0.0014587370089836182, 0.07168189594025118, 0.0, 0.0, 0.8620455517088178, 0.002282418118250008, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.9136 | 26.0 | 520 | 4.6604 | 0.0024 | 0.0214 | 0.0528 | [0.015378752409054779, 0.002252457592122834, 0.0412171604470604, 0.0, 0.0, 0.04899349527367102, 0.0009124920156948627, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.01687475358128532, 0.002311960542540074, 0.09004047231610131, 0.0, 0.0, 0.8735214974893749, 0.0010374627810227307, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.4323 | 27.0 | 540 | 4.6509 | 0.0021 | 0.0213 | 0.0512 | [0.008157874640060734, 0.0032929673531907626, 0.028872494733423817, 0.0, 0.0, 0.04957735317827306, 0.007474147640012287, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.008678717786353481, 0.0033963801303505373, 0.048226033017621454, 0.0, 0.0, 0.9104473858482895, 0.010602869622052308, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.1896 | 28.0 | 560 | 5.2046 | 0.0018 | 0.0213 | 0.0502 | [0.0018920989684556736, 0.0023472917517134693, 0.024999889086561283, 0.0, 0.0, 0.04951497115374738, 0.005775031859463471, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0019355069953046033, 0.0024055398978333627, 0.03918582495375586, 0.0, 0.0, 0.9293168920221639, 0.007428233512122752, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.2059 | 29.0 | 580 | 4.8114 | 0.0024 | 0.0215 | 0.0517 | [0.008470685096344147, 0.0008811722854998205, 0.03726879543572283, 0.0, 0.0, 0.04982524192229595, 0.013581450751183851, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.008948732959772518, 0.0008917562092654571, 0.07885843034171987, 0.0, 0.0, 0.8760530793927557, 0.02413138428658872, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.3919 | 30.0 | 600 | 5.1191 | 0.0027 | 0.0215 | 0.0513 | [0.0030311354884970504, 0.008587244874488215, 0.0409836925449386, 0.0, 0.0, 0.050577819835394514, 0.01984300595507121, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0031111482813414738, 0.009247842170160296, 0.10867025493386741, 0.0, 0.0, 0.8127467663651681, 0.05693595742252747, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.074 | 31.0 | 620 | 5.1277 | 0.0024 | 0.0218 | 0.0535 | [0.002405858724589146, 0.0019974334054628728, 0.05209079401332492, 0.0, 0.0, 0.049859000336766386, 0.004391811066680337, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0024444736496254437, 0.0020477364805354943, 0.2543984089233808, 0.0, 0.0, 0.7372016798973955, 0.005332558694456836, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.9467 | 32.0 | 640 | 4.8383 | 0.0027 | 0.0217 | 0.0542 | [0.00884771943766523, 0.006017525455750573, 0.05172694595722066, 0.0, 0.0, 0.050596522852873506, 0.008527830257230053, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.009364508536541655, 0.006346882156068346, 0.2517837025910627, 0.0, 0.0, 0.7152808630850092, 0.015219578997603461, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.8523 | 33.0 | 660 | 4.8227 | 0.0027 | 0.0216 | 0.0551 | [0.017259111376148373, 0.0015799321653698281, 0.05228438111187712, 0.0, 0.0, 0.04917695825696716, 0.0039672549547281515, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.019087444294436016, 0.001612867711819623, 0.24576153321928762, 0.0, 0.0, 0.7220038057555767, 0.00474120490927388, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.8944 | 34.0 | 680 | 5.0215 | 0.0027 | 0.0215 | 0.0546 | [0.020036138673218466, 0.0023797962766326417, 0.04781831194540746, 0.0, 0.0, 0.049451322269670125, 0.0034373806465053296, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.022760128555896725, 0.0024550819094592215, 0.16679879278452317, 0.0, 0.0, 0.7928042718348939, 0.004108352612850014, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.3328 | 35.0 | 700 | 4.8829 | 0.0028 | 0.0217 | 0.0562 | [0.021031464544060196, 0.0009239148053295296, 0.05259422094660883, 0.0, 0.0, 0.04985829010272124, 0.004742535990953739, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.024017013345440208, 0.0009412982208913158, 0.2602119581090666, 0.0, 0.0, 0.7085746858575105, 0.005830540829347747, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.625 | 36.0 | 720 | 5.0325 | 0.0032 | 0.0216 | 0.0594 | [0.04025403959132762, 0.001005084864826893, 0.05367145502271344, 0.0, 0.0, 0.049003298464218255, 0.005375594491177519, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0505095640330231, 0.0010238682402677471, 0.2876455125797972, 0.0, 0.0, 0.6452264592222511, 0.007199991700297751, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.8508 | 37.0 | 740 | 5.3534 | 0.0041 | 0.0215 | 0.0631 | [0.061565480359933186, 0.021621069476838218, 0.04592210084382379, 0.0, 0.0, 0.04885083541675519, 0.0085262806793229, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.08736782996212619, 0.02796371322881804, 0.1461940724051126, 0.0, 0.0, 0.7131600345368128, 0.012729668323148906, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.0736 | 38.0 | 760 | 5.8265 | 0.0031 | 0.0215 | 0.0580 | [0.03841715914568649, 0.0020126231725381594, 0.049302348304157646, 0.0, 0.0, 0.0489528017190962, 0.0028558853733678907, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.048370948278951956, 0.0020642504844107803, 0.19649935327742313, 0.0, 0.0, 0.736740630213005, 0.003029391320586374, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.1181 | 39.0 | 780 | 4.5260 | 0.0028 | 0.0225 | 0.0584 | [0.009498588047722635, 1.6469761517853223e-05, 0.05800136797280348, 0.0, 0.0, 0.05252217169848623, 0.00782175425401899, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.010078973464438045, 1.6514003875286242e-05, 0.47529241596083505, 0.0, 0.0, 0.5367037462382537, 0.01294753550716368, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.4297 | 40.0 | 800 | 4.5453 | 0.0030 | 0.0221 | 0.0581 | [0.015113575389018065, 0.0002574058962380402, 0.05674934903610563, 0.0, 0.0, 0.05302696676995729, 0.012638915219962264, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.016556948111685923, 0.00025871939404615115, 0.472406503386601, 0.0, 0.0, 0.4996353516132548, 0.02904895786863646, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.0033 | 41.0 | 820 | 5.1741 | 0.0035 | 0.0220 | 0.0623 | [0.041866933417913, 0.0017643440632789823, 0.05790090987915049, 0.0, 0.0, 0.049137446640780284, 0.010721595035320525, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.05330529635957419, 0.0018000264224062004, 0.4476571953102182, 0.0, 0.0, 0.4911520374204689, 0.016848395563809147, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.6608 | 42.0 | 840 | 5.1182 | 0.0035 | 0.0219 | 0.0590 | [0.034754246955807644, 0.0035782164130769476, 0.05313277757044455, 0.0, 0.0, 0.05119819620969342, 0.01917268548916612, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.04287745373301951, 0.003710146203980976, 0.29899445070305003, 0.0, 0.0, 0.6144367230264978, 0.04695556546908879, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.4045 | 43.0 | 860 | 4.7092 | 0.0035 | 0.0216 | 0.0548 | [0.02550941878338748, 0.008538757053103365, 0.04603325745202767, 8.469095500600536e-05, 0.0, 0.05063631126771016, 0.03190308879205971, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.0297016690760941, 0.009396468205037872, 0.20305698112682716, 8.473401223251014e-05, 0.0, 0.6005381707225068, 0.15062922117669028, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.5675 | 44.0 | 880 | 5.0089 | 0.0045 | 0.0215 | 0.0658 | [0.07235035137001497, 0.005978911311955759, 0.05021056085755695, 0.0, 0.0, 0.04914273251366387, 0.029048066287934854, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.1106560412908159, 0.006379910163818918, 0.25537196979179705, 0.0, 0.0, 0.5061990225746691, 0.11198373258359356, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.3889 | 45.0 | 900 | 5.1724 | 0.0041 | 0.0216 | 0.0634 | [0.0574992865026941, 0.008969508681382006, 0.05358679147629314, 0.0, 0.0, 0.04923625330977958, 0.021266063242792923, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.07991493327279896, 0.009853355645587458, 0.32615679893186467, 0.0, 0.0, 0.522654304946644, 0.0548195333492411, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.9349 | 46.0 | 920 | 5.2161 | 0.0035 | 0.0217 | 0.0567 | [0.026769652929518776, 0.008834490643358789, 0.05125529735621071, 0.0, 0.0, 0.05063854007328092, 0.02470501597795409, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.03146991003476744, 0.009825832305795314, 0.26064310649365097, 0.0, 0.0, 0.6205058134173841, 0.07691749058502526, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 2.7756 | 47.0 | 940 | 6.2006 | 0.0042 | 0.0207 | 0.0678 | [0.08605449104167727, 0.01479096741785967, 0.03755570822103613, 0.0, 0.0, 0.046725837950406156, 0.008160237388724036, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.14539002855470196, 0.017829619517350712, 0.08895564734843743, 0.0, 0.0, 0.6891435373408331, 0.011412090591250038, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.1296 | 48.0 | 960 | 5.8443 | 0.0040 | 0.0209 | 0.0627 | [0.06674388043020953, 0.01825808255716406, 0.04126552804466341, 0.0, 0.0, 0.047785578143774626, 0.009119262322869347, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.09818038447293277, 0.023361810815571604, 0.11806511731408464, 0.0, 0.0, 0.707417870285767, 0.013953874404755729, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.2641 | 49.0 | 980 | 4.7815 | 0.0040 | 0.0207 | 0.0625 | [0.06625970764575073, 0.005891244125521674, 0.04717574225996364, 0.0, 0.0, 0.04820377566517319, 0.01573677883605, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.09747069857465442, 0.00628633080852563, 0.21186075297979165, 0.0, 0.0, 0.5981239469206072, 0.03884260652149104, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.2327 | 50.0 | 1000 | 5.2415 | 0.0044 | 0.0212 | 0.0625 | [0.06352239323592722, 0.0197948041351654, 0.045329518997010716, 0.0, 0.0, 0.04870301462914723, 0.02500211303776811, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.09103573519396886, 0.02510679055839352, 0.1716735511328076, 0.0, 0.0, 0.6087197069400552, 0.07979126248845822, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
### Framework versions
- Transformers 4.43.3
- Pytorch 2.3.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
distily/distily_profile_smollm
|
distily
| 2024-09-12T16:59:47Z | 25 | 0 |
Distily
|
[
"Distily",
"tensorboard",
"safetensors",
"llama",
"generated_from_trainer",
"dataset:wikimedia/wikipedia",
"base_model:HuggingFaceTB/SmolLM-135M",
"base_model:finetune:HuggingFaceTB/SmolLM-135M",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-09-12T07:03:37Z |
---
base_model: HuggingFaceTB/SmolLM-135M
datasets:
- wikimedia/wikipedia
library_name: Distily
license: creativeml-openrail-m
tags:
- generated_from_trainer
- Distily
base_model_relation: finetune
model-index:
- name: distily_profile_smollm
results: []
---
# Summary
Distilled with the [Distily](https://github.com/lapp0/distily) library
using teacher model [HuggingFaceTB/SmolLM-135M](https://huggingface.co/HuggingFaceTB/SmolLM-135M)
on dataset [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).
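A quick-start sketch (assuming the standard `transformers` API; the student is a plain `LlamaForCausalLM`). If the repository does not ship its own tokenizer, the teacher's tokenizer should be compatible, since the 49,152-token vocabulary is unchanged:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged usage sketch: load the distilled student as a regular causal LM.
model = AutoModelForCausalLM.from_pretrained("distily/distily_profile_smollm", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-135M")  # teacher tokenizer (assumed compatible)

inputs = tokenizer("Knowledge distillation is", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```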
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment.
# Model description
More information needed
# Intended uses & limitations
More information needed
-->
# Model Architecture:
- **Architecture**: `LlamaForCausalLM`
- **Total Parameters**: 81,413,568
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.15 GB
<details>
<summary>Student Model Details</summary>
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(49152, 576)
(layers): ModuleList(
(0-14): 15 x LlamaDecoderLayer(
(self_attn): LlamaSdpaAttention(
(q_proj): Linear(in_features=576, out_features=576, bias=False)
(k_proj): Linear(in_features=576, out_features=192, bias=False)
(v_proj): Linear(in_features=576, out_features=192, bias=False)
(o_proj): Linear(in_features=576, out_features=576, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=576, out_features=1536, bias=False)
(up_proj): Linear(in_features=576, out_features=1536, bias=False)
(down_proj): Linear(in_features=1536, out_features=576, bias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm((576,), eps=1e-05)
(post_attention_layernorm): LlamaRMSNorm((576,), eps=1e-05)
)
)
(norm): LlamaRMSNorm((576,), eps=1e-05)
(rotary_emb): LlamaRotaryEmbedding()
)
(lm_head): Linear(in_features=576, out_features=49152, bias=False)
)
```
</details>
<br/>
# Resource Usage
- Max Train VRAM Use: 12.7946 GB
- Available VRAM: 23.4329 GB
- GPUs:
- 1x NVIDIA GeForce RTX 4090
- CPUs: 64
- CPU Memory: 251.7299 GB
- CPU Memory Bandwidth: 1600 GB/s
# Distillation (Teacher -> Student) Architecture Difference:
- **Architecture**: `LlamaForCausalLM` -> `LlamaForCausalLM`
- **Total Parameters**: 134,515,008 -> 81,413,568
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.25 GB -> 0.15 GB
<details>
<summary>Module Diff Details</summary>
```diff
--- teacher model modules
+++ student model modules
@@ -2,7 +2,7 @@
(model): LlamaModel(
(embed_tokens): Embedding(49152, 576)
(layers): ModuleList(
- (0-29): 30 x LlamaDecoderLayer(
+ (0-14): 15 x LlamaDecoderLayer(
(self_attn): LlamaSdpaAttention(
(q_proj): Linear(in_features=576, out_features=576, bias=False)
(k_proj): Linear(in_features=576, out_features=192, bias=False)
```
</details>
<br/>
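The halving of the decoder stack shown above matches the `student_model_config` (`{'num_hidden_layers': 15}`) listed under the hyperparameters. A hedged sketch of how such a student could be instantiated from the teacher's configuration (illustrative only, not the exact Distily code):
```python
from transformers import AutoConfig, AutoModelForCausalLM

# Sketch: build a 15-layer student from the 30-layer teacher's config.
teacher_id = "HuggingFaceTB/SmolLM-135M"
config = AutoConfig.from_pretrained(teacher_id)
config.num_hidden_layers = 15  # half the teacher's decoder layers

student = AutoModelForCausalLM.from_config(config)
print(f"{sum(p.numel() for p in student.parameters()):,} parameters")  # ~81M, as reported above
```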
# Train Dataset
Trained on 84,871,894 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
- Num Samples: `99,800`
- Subset: `20231101.en`
- Split: `train`
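For reference, the corresponding corpus can be loaded with the `datasets` library; the sample size below mirrors `dataset_sample_size` in the hyperparameters:
```python
from datasets import load_dataset

# Sketch: English Wikipedia dump 20231101, train split, first 100k articles.
ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
sample = ds.select(range(100_000))
print(sample[0]["text"][:200])
```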
# Training Objective
```
DistillationObjective(
logits_loss_component=LossComponent(
weight=1,
loss_fn='kl'
),
hs_loss_component=LossComponent(
weight=0
),
attn_loss_component=LossComponent(
weight=0
)
)
```
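Only the logits component is active (weight 1, KL loss); hidden-state and attention matching are disabled, so this single term is the entire training loss. A minimal PyTorch sketch of a logits-only KL distillation loss (an illustration of the idea, not the Distily implementation; the temperature parameter is an assumption):
```python
import torch.nn.functional as F
from torch import Tensor

def kl_logits_loss(student_logits: Tensor, teacher_logits: Tensor, temperature: float = 1.0) -> Tensor:
    """Forward KL between the teacher's and student's next-token distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # `batchmean` averages the KL over the batch; the T^2 factor is the usual
    # correction when distilling with a softened temperature.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2
```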
# Hyperparameters
The following hyperparameters were used during training:
<details>
<summary>Expand</summary>
- learning_rate: `0.0002`
- train_batch_size: `4`
- eval_batch_size: `2`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `polynomial`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(
logits_loss_component=LossComponent(
weight=1,
loss_fn='kl'
),
hs_loss_component=LossComponent(
weight=0
),
attn_loss_component=LossComponent(
weight=0
)
)`
- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7eb253ff9660>`
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `{'num_hidden_layers': 15}`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `False`
- student_model_use_liger: `False`
- teacher_model_name_or_path: `HuggingFaceTB/SmolLM-135M`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `100000`
- dataset_test_size: `0.002`
- dataset_shuffle: `False`
- dataset_shuffle_seed: `42`
- dataset_trust_remote_code: `False`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.0`
- warmup_steps: `0`
- gradient_checkpointing: `True`
</details>
<br/>
# Framework Versions
- Distily 0.5.0
- Transformers 4.44.2
- Pytorch 2.5.0.dev20240911+cu121
- Datasets 2.21.0
|
saadamin2k13/urdu_text_generation
|
saadamin2k13
| 2024-09-12T16:59:06Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ur",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-09-12T16:51:45Z |
---
language:
- ur
metrics:
- bleu
- meteor
- rouge
- chrf
- bertscore
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card describes a fine-tuned byT5 model for the task of text generation from meaning representations (Discourse Representation Structures, DRS).
## Model Details
We started from a pre-trained byt5-base model and fine-tuned it on the Parallel Meaning Bank (PMB) dataset of DRS–text pairs.
Furthermore, we enriched the gold_silver flavors of PMB (release 5.0.0) with different augmentation strategies.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
To use the model, follow the code below for a quick response.
```python
from transformers import ByT5Tokenizer, T5ForConditionalGeneration
# Initialize the tokenizer and model
tokenizer = ByT5Tokenizer.from_pretrained('saadamin2k13/urdu_text_generation', max_length=512)
model = T5ForConditionalGeneration.from_pretrained('saadamin2k13/urdu_text_generation')
# Example sentence
example = "male.n.02 Name 'ٹام' yell.v.01 Agent -1 Time +1 time.n.08 TPR now"
# Tokenize and prepare the input
x = tokenizer(example, return_tensors='pt', padding=True, truncation=True, max_length=512)['input_ids']
# Generate output
output = model.generate(x)
# Decode and print the output text
pred_text = tokenizer.decode(output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(pred_text)
```
|
ontocord/phi-3-22b-128k
|
ontocord
| 2024-09-12T16:45:02Z | 12 | 1 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-22T20:45:45Z |
---
license: mit
---
## Model Summary
Phi-3-22b is a depth-upsampled version of the 14B [Phi-3-medium-128k-instruct](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct), produced without any fine-tuning. We removed the bottom 8 layers of one copy of the 14B model and the top 8 layers of another copy, then stacked the remaining layers. We plan to do continued pretraining to improve performance.
Since this model has not undergone continued pretraining, output quality may vary.
A [GGUF version](https://huggingface.co/mradermacher/phi-3-22b-GGUF) is available thanks to @mradermacher!
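A rough, illustrative sketch of the layer stacking described above (the exact layer indices and ordering are assumptions based on the description; only the construction idea is shown):
```python
import copy
from torch import nn
from transformers import AutoModelForCausalLM

# Hedged sketch of depth upsampling: two copies of the 40-layer 14B model,
# drop the top 8 layers of one and the bottom 8 of the other, then stack.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-medium-128k-instruct", torch_dtype="auto", trust_remote_code=True
)
other = copy.deepcopy(base)

lower = list(base.model.layers)[:-8]   # keep layers 0..31 of copy 1
upper = list(other.model.layers)[8:]   # keep layers 8..39 of copy 2

base.model.layers = nn.ModuleList(lower + upper)  # 64 decoder layers (~22B params)
base.config.num_hidden_layers = len(base.model.layers)
```
Instantiating two full copies of the 14B model this way requires substantial memory; the snippet is meant only to illustrate the construction.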
Loading the model:
```
!pip install flash-attn --no-build-isolation
!pip install peft bitsandbytes accelerate transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("ontocord/phi-3-22b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("ontocord/phi-3-22b",
torch_dtype="auto", device_map="auto", trust_remote_code=True, )
```
Basic test
```
with torch.no_grad():
    print(tokenizer.batch_decode(model.generate(**tokenizer("<|user|>\nHow to explain Internet for a medieval knight?<|end|>\n<|assistant|>\n", return_tensors="pt").to('cuda'), max_new_tokens=128, use_cache=True))[0])
```
Will produce:
```
<|user|> How to explain Internet for a medieval knight?<|end|><|assistant|> Ah, noble knight, let me attempt to explain this mystical realm known as the Internet in terms that might resonate with your medieval understanding.
Imagine, if you will, a vast kingdom stretching beyond the horizon, where countless villages, towns, and cities are connected by a network of roads, bridges, and pathways. This kingdom is not bound by physical borders, but instead, it exists in a realm beyond our own, accessible only through magical devices known as computers, tablets, and smartphs.
In this kingdom, information flows like a mighty river,...
```
To run on a Colab T4, try 4-bit
```
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("ontocord/phi-3-22b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("ontocord/phi-3-22b",
load_in_4bit=True, device_map="auto", trust_remote_code=True, )
with torch.no_grad():
    print(tokenizer.batch_decode(model.generate(**tokenizer("<|user|>\nHow to explain Internet for a medieval knight?<|end|>\n<|assistant|>\n", return_tensors="pt").to('cuda'), max_new_tokens=128, use_cache=True))[0])
```
Will produce:
```
<|user|> How to explain Internet for a medieval knight?<|end|><|assistant|> Ah, noble knight, let me attempt to explain this mystical network known as the Internet, using terms and analogies from your time.
Imagine a vast kingdom, stretching far beyond the horizon, where countless villages, towns, and cities are connected by roads, rivers, and paths. Each village is like a castle, filled with people who share knowledge, goods, stories, and news.
Now, imagine that instead of messengers, horses, or ships, there exists a magical network of invisible threads connecting all these villages. This network is invisible to the eye, yet it allows messages, scroll
```
```
import torch
with torch.no_grad():
    print(tokenizer.batch_decode(model.generate(**tokenizer("<|user|>\nExplain why it is surprising that one can build a language model small enough to fit on a phone, yet almost as powerful as ChatGPT. Just use one funny sentence.<|end|>\n<|assistant|>\n", return_tensors="pt").to('cuda'), max_new_tokens=128, use_cache=True))[0])
```
Will produce:
```
<|user|> Explain why it is surprising that one can build a language model small enough to fit on a phone, yet almost as powerful as ChatGPT. Just use one funny sentence.<|end|><|assistant|> "Who knew that fitting a ChatGPT rival in your pocket would be easier than fitting a penguin in a pocket-sized suit!"<|end|>
```
Some harder reasoning tests of the model in [colab](https://colab.research.google.com/drive/1eLoQXhysnBmN7DNNB6yElpELOSe6DHHH?usp=sharing).
See the [Phi-3-medium-128k-instruct](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) model card for more details.
|
airev-ai/Amal-70b-v2.4
|
airev-ai
| 2024-09-12T16:43:43Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:airev-ai/Amal-70b-v2",
"base_model:adapter:airev-ai/Amal-70b-v2",
"license:apache-2.0",
"region:us"
] | null | 2024-09-12T16:33:36Z |
---
base_model: airev-ai/Amal-70b-v2
library_name: peft
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
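In the meantime, a hedged placeholder sketch: this repository is tagged as a PEFT adapter on `airev-ai/Amal-70b-v2`, so standard PEFT loading should apply (the base model is 70B, which requires multiple GPUs or offloading).
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: attach the adapter in this repo to its assumed base model.
base = AutoModelForCausalLM.from_pretrained("airev-ai/Amal-70b-v2", torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "airev-ai/Amal-70b-v2.4")
tokenizer = AutoTokenizer.from_pretrained("airev-ai/Amal-70b-v2")
```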
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0
|
DeepAutoAI/d2nwg_Llama-3.1-8B-Instruct-v0.0
|
DeepAutoAI
| 2024-09-12T16:43:16Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-10T17:01:15Z |
---
library_name: transformers
tags: []
model-index:
- name: d2nwg_Llama-3.1-8B-Instruct-v0.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 78.93
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepAutoAI/d2nwg_Llama-3.1-8B-Instruct-v0.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 30.51
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepAutoAI/d2nwg_Llama-3.1-8B-Instruct-v0.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 7.93
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepAutoAI/d2nwg_Llama-3.1-8B-Instruct-v0.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.59
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepAutoAI/d2nwg_Llama-3.1-8B-Instruct-v0.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.98
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepAutoAI/d2nwg_Llama-3.1-8B-Instruct-v0.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 31.97
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepAutoAI/d2nwg_Llama-3.1-8B-Instruct-v0.0
name: Open LLM Leaderboard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
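In the meantime, a minimal text-generation sketch with 🤗 transformers is given below. The repository id comes from this card, but the chat-template call, dtype/device settings and prompt are illustrative assumptions rather than an official quick-start.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DeepAutoAI/d2nwg_Llama-3.1-8B-Instruct-v0.0"

# Load the tokenizer and model (assumes enough memory for an 8B checkpoint)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Build a chat-formatted prompt with the tokenizer's chat template
messages = [{"role": "user", "content": "Explain instruction tuning in two sentences."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Generate and decode only the newly produced tokens
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```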
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_DeepAutoAI__d2nwg_Llama-3.1-8B-Instruct-v0.0)
| Metric |Value|
|-------------------|----:|
|Avg. |27.65|
|IFEval (0-Shot) |78.93|
|BBH (3-Shot) |30.51|
|MATH Lvl 5 (4-Shot)| 7.93|
|GPQA (0-shot) | 5.59|
|MuSR (0-shot) |10.98|
|MMLU-PRO (5-shot) |31.97|
|
Elvijs/classification_vit_playaround
|
Elvijs
| 2024-09-12T16:38:36Z | 6 | 0 | null |
[
"safetensors",
"vit",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"region:us"
] | null | 2024-09-11T16:01:30Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: classification_vit_playaround
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classification_vit_playaround
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6380
- Accuracy: 0.89
## Model description
More information needed
## Intended uses & limitations
More information needed
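Pending more details, an inference sketch along the lines below should work for trying the classifier; the image path is a placeholder and the label names depend on the (unspecified) training dataset.
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "Elvijs/classification_vit_playaround"

# Load the image processor and the fine-tuned classification head
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

# Classify a local image (path is a placeholder)
image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
predicted = logits.argmax(-1).item()
print(model.config.id2label.get(predicted, predicted))
```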
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7533 | 0.992 | 62 | 2.5753 | 0.83 |
| 1.8529 | 2.0 | 125 | 1.8001 | 0.865 |
| 1.5759 | 2.976 | 186 | 1.6380 | 0.89 |
### Framework versions
- Transformers 4.43.3
- Pytorch 2.3.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
SongTonyLi/gemma-2b-it-SFT-D_chosen-HuggingFaceH4-ultrafeedback_binarized
|
SongTonyLi
| 2024-09-12T16:37:42Z | 123 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-12T16:18:47Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
saadamin2k13/urdu_semantic_parsing
|
saadamin2k13
| 2024-09-12T16:26:19Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"text-generation-inference",
"ur",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-30T10:43:13Z |
---
language:
- ur
metrics:
- accuracy
library_name: transformers
tags:
- text-generation-inference
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card describes a fine-tuned byT5 model for the task of semantic parsing.
## Model Details
We worked on a pre-trained byt5-base model and fine-tuned it with the Parallel Meaning Bank dataset (DRS-Text pairs dataset).
Furthermore, we enriched the gold_silver flavors of PMB (release 5.0.0) with different augmentation strategies.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
To use the model, follow the code below for a quick response.
```python
from transformers import ByT5Tokenizer, T5ForConditionalGeneration
# Initialize the tokenizer and model
tokenizer = ByT5Tokenizer.from_pretrained('saadamin2k13/urdu_semantic_parsing', max_length=512)
model = T5ForConditionalGeneration.from_pretrained('saadamin2k13/urdu_semantic_parsing')
# Example sentence
example = "یہ کار کالی ہے۔"
# Tokenize and prepare the input
x = tokenizer(example, return_tensors='pt', padding=True, truncation=True, max_length=512)['input_ids']
# Generate output
output = model.generate(x)
# Decode and print the output text
pred_text = tokenizer.decode(output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(pred_text)
```
|
tronsdds/google-gemma-7b-1726158304
|
tronsdds
| 2024-09-12T16:25:52Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-7b",
"base_model:adapter:google/gemma-7b",
"region:us"
] | null | 2024-09-12T16:25:04Z |
---
base_model: google/gemma-7b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0
|
somu9/whisper-small-alb
|
somu9
| 2024-09-12T16:10:02Z | 15 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"sq",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-09-08T13:00:47Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Small Albanian - Sumitesh
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
config: sq
split: None
args: 'config: sq, split: test'
metrics:
- name: Wer
type: wer
value: 52.63324873096447
language:
- sq
pipeline_tag: automatic-speech-recognition
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Alb - Sumitesh
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2013
- Wer: 52.6332
## Model description
This is a speech-to-text model fine-tuned from OpenAI's Whisper model.
## Intended uses & limitations
This model is free to use for learning or commercial purposes. I don't plan to monetize it or make it private. My goal is to make Whisper more localized, which is why I have trained this model and made it public for everyone.
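A minimal transcription sketch with the 🤗 pipeline API is shown below; the audio file name is a placeholder and the clip is assumed to be spoken Albanian.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline
asr = pipeline("automatic-speech-recognition", model="somu9/whisper-small-alb")

# Transcribe a local Albanian audio clip (file name is a placeholder)
result = asr("sample_albanian.wav")
print(result["text"])
```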
## Training and evaluation data
This model is trained on the [Common Voice 17 dataset](https://commonvoice.mozilla.org/en/datasets), an open-source multilingual dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.005 | 15.1515 | 1000 | 0.9955 | 53.7437 |
| 0.0003 | 30.3030 | 2000 | 1.1066 | 52.5698 |
| 0.0001 | 45.4545 | 3000 | 1.1585 | 52.8553 |
| 0.0001 | 60.6061 | 4000 | 1.1889 | 52.7284 |
| 0.0001 | 75.7576 | 5000 | 1.2013 | 52.6332 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Nabokov/gemma-2-Ifable-9B-Q4_K_S-GGUF
|
Nabokov
| 2024-09-12T16:05:37Z | 6 | 1 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:ifable/gemma-2-Ifable-9B",
"base_model:quantized:ifable/gemma-2-Ifable-9B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-09-12T16:05:11Z |
---
base_model: ifable/gemma-2-Ifable-9B
tags:
- llama-cpp
- gguf-my-repo
---
# Nabokov/gemma-2-Ifable-9B-Q4_K_S-GGUF
This model was converted to GGUF format from [`ifable/gemma-2-Ifable-9B`](https://huggingface.co/ifable/gemma-2-Ifable-9B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ifable/gemma-2-Ifable-9B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Nabokov/gemma-2-Ifable-9B-Q4_K_S-GGUF --hf-file gemma-2-ifable-9b-q4_k_s-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Nabokov/gemma-2-Ifable-9B-Q4_K_S-GGUF --hf-file gemma-2-ifable-9b-q4_k_s-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Nabokov/gemma-2-Ifable-9B-Q4_K_S-GGUF --hf-file gemma-2-ifable-9b-q4_k_s-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Nabokov/gemma-2-Ifable-9B-Q4_K_S-GGUF --hf-file gemma-2-ifable-9b-q4_k_s-imat.gguf -c 2048
```
|
Dunateo/roberta-cwe-classifier-kelemia
|
Dunateo
| 2024-09-12T16:04:55Z | 14 | 2 | null |
[
"safetensors",
"roberta",
"text-classification",
"bert",
"CWE",
"security",
"en",
"dataset:Dunateo/VulnDesc_CWE_Mapping",
"license:mit",
"region:us"
] |
text-classification
| 2024-08-21T22:02:22Z |
---
language: en
license: mit
tags:
- text-classification
- bert
- roberta
- CWE
- security
datasets:
- Dunateo/VulnDesc_CWE_Mapping
metrics:
- loss
---
# Kelemia for CWE Classification
This model is a fine-tuned version of RoBERTa for classifying Common Weakness Enumeration (CWE) vulnerabilities.
Try now the v0.2 : [Dunateo/roberta-cwe-classifier-kelemia-v0.2](https://huggingface.co/Dunateo/roberta-cwe-classifier-kelemia-v0.2)
## Model description
- **Model type:** RoBERTa
- **Language(s):** English
- **License:** MIT
- **Finetuned from model:** [roberta-base](https://huggingface.co/roberta-base)
## Intended uses & limitations
This model is intended for classifying software vulnerabilities according to the CWE standard. It should be used as part of a broader security analysis process and not as a standalone solution for identifying vulnerabilities.
## Training and evaluation data
[Dunateo/VulnDesc_CWE_Mapping](https://huggingface.co/datasets/Dunateo/VulnDesc_CWE_Mapping)
# Example Usage
Here's an example of how to use this model for inference:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load model and tokenizer
model_name = "Dunateo/roberta-cwe-classifier-kelemia"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()
# Prepare input text
text = "The application stores sensitive user data in plaintext."
# Tokenize and prepare input
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)
# Perform inference
with torch.no_grad():
outputs = model(**inputs)
# Get prediction
probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(probabilities, dim=-1).item()
print(f"Predicted CWE class: {predicted_class}")
print(f"Confidence: {probabilities[predicted_class].item():.4f}")
```
## Label Dictionary
This model uses the following mapping for CWE classes:
```json
{
"0": "CWE-79",
"1": "CWE-89",
...
}
```
```python
import json
from huggingface_hub import hf_hub_download
label_dict_file = hf_hub_download(repo_id="Dunateo/roberta-cwe-classifier-kelemia", filename="label_dict.json")
with open(label_dict_file, 'r') as f:
label_dict = json.load(f)
id2label = {v: k for k, v in label_dict.items()}
print(f"Label : {id2label[predicted_class]}")
```
You can now use `label_dict` to map prediction indices to CWE classes.
## Training procedure
### Training hyperparameters
- **Number of epochs:** 3
- **Learning rate:** Scheduled from 1e-06 to 3.9e-05
- **Batch size:** 8
- **Weight decay:** 0.01
- **Learning rate scheduler:** 5e-5
### Training results
- **Training Loss:** 4.201853184822278 (final)
- **Validation Loss:** 2.821094036102295 (final)
- **Training Time:** 5893.2502 seconds (approximately 1 hour 38 minutes)
- **Samples per Second:** 1.059
- **Steps per Second:** 0.066
#### Loss progression
| Epoch | Training Loss | Validation Loss |
|-------|---------------|-----------------|
| 1.0 | 4.822 | 4.639444828 |
| 2.0 | 3.6549 | 3.355055332 |
| 3.0 | 3.0617 | 2.821094036 |
## Evaluation results
The model shows consistent improvement over the training period:
- **Initial Training Loss:** 5.5987
- **Final Training Loss:** 3.0617
- **Initial Validation Loss:** 4.639444828
- **Final Validation Loss:** 2.821094036
### Performance analysis
- The model demonstrates a steady decrease in both training and validation loss, indicating good learning progress.
- The final validation loss (2.82) being lower than the final training loss (3.06) suggests that the model generalizes well to unseen data.
- There were two instances of gradient explosion (grad_norm of 603089.0625 and 68246.296875) early in training, but the model recovered and stabilized.
## Ethical considerations
This model should be used responsibly as part of a comprehensive security strategy. It should not be relied upon as the sole method for identifying or classifying vulnerabilities. False positives and negatives are possible, and results should be verified by security professionals.
## Additional information
For more details on the CWE standard, please visit [Common Weakness Enumeration](https://cwe.mitre.org/).
My report on this : [Fine-tuning blogpost](https://dunateo.github.io/posts/fine-tuning/).
|
QuantFactory/Romulus-cpt-Llama-3.1-8B-v0.1-GGUF
|
QuantFactory
| 2024-09-12T16:02:09Z | 116 | 1 |
transformers
|
[
"transformers",
"gguf",
"law",
"droit",
"unsloth",
"trl",
"sft",
"llama",
"text-generation",
"fr",
"dataset:louisbrulenaudet/Romulus-cpt-fr",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-12T15:21:15Z |
---
datasets:
- louisbrulenaudet/Romulus-cpt-fr
license: llama3
language:
- fr
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- law
- droit
- unsloth
- trl
- transformers
- sft
- llama
---
[](https://hf.co/QuantFactory)
# QuantFactory/Romulus-cpt-Llama-3.1-8B-v0.1-GGUF
This is quantized version of [louisbrulenaudet/Romulus-cpt-Llama-3.1-8B-v0.1](https://huggingface.co/louisbrulenaudet/Romulus-cpt-Llama-3.1-8B-v0.1) created using llama.cpp
# Original Model Card
<img src="assets/thumbnail.webp">
# Romulus, continually pre-trained models for French law.
Romulus is a series of continually pre-trained models enriched in French law and intended to serve as the basis for a fine-tuning process on labeled data. Please note that these models have not been aligned for the production of usable text as they stand, and will certainly need to be fine-tuned for the desired tasks in order to produce satisfactory results.
The training corpus is made up of around 34,864,949 tokens (calculated with the meta-llama/Meta-Llama-3.1-8B tokenizer).
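As a rough illustration (not the exact script used for the reported figure), the count can be reproduced by running the base tokenizer over the raw `texte` column of the corpus:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Tokenizer used for the reported token count
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")

# Continual pre-training corpus
dataset = load_dataset("louisbrulenaudet/Romulus-cpt-fr", split="train")

# Sum token counts over the raw text column
total_tokens = sum(len(tokenizer(text)["input_ids"]) for text in dataset["texte"])
print(f"{total_tokens:,} tokens")
```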
## Hyperparameters
The following table outlines the key hyperparameters used for training Romulus.
| **Parameter** | **Description** | **Value** |
|----------------------------------|-----------------------------------------------------------------|-----------------------------|
| `max_seq_length` | Maximum sequence length for the model | 4096 |
| `load_in_4bit` | Whether to load the model in 4-bit precision | False |
| `model_name` | Pre-trained model name from Hugging Face | meta-llama/Meta-Llama-3.1-8B|
| `r` | Rank of the LoRA adapter | 128 |
| `lora_alpha` | Alpha value for the LoRA module | 32 |
| `lora_dropout` | Dropout rate for LoRA layers | 0 |
| `bias` | Bias type for LoRA adapters | none |
| `use_gradient_checkpointing` | Whether to use gradient checkpointing | unsloth |
| `train_batch_size` | Per device training batch size | 8 |
| `gradient_accumulation_steps` | Number of gradient accumulation steps | 8 |
| `warmup_ratio` | Warmup steps as a fraction of total steps | 0.1 |
| `num_train_epochs` | Number of training epochs | 1 |
| `learning_rate` | Learning rate for the model | 5e-5 |
| `embedding_learning_rate` | Learning rate for embeddings | 1e-5 |
| `optim` | Optimizer used for training | adamw_8bit |
| `weight_decay` | Weight decay to prevent overfitting | 0.01 |
| `lr_scheduler_type` | Type of learning rate scheduler | linear |
# Training script
Romulus was trained using Unsloth on a Nvidia H100 Azure EST US instance provided by the Microsoft for Startups program from this script:
```python
# -*- coding: utf-8 -*-
import os
from typing import (
Dict,
)
from datasets import load_dataset
from unsloth import (
FastLanguageModel,
is_bfloat16_supported,
UnslothTrainer,
UnslothTrainingArguments,
)
max_seq_length = 4096
dtype = None
load_in_4bit = False
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="meta-llama/Meta-Llama-3.1-8B",
max_seq_length=max_seq_length,
dtype=dtype,
load_in_4bit=load_in_4bit,
token="hf_token",
)
model = FastLanguageModel.get_peft_model(
model,
r=128,
target_modules=[
"q_proj",
"k_proj",
"v_proj",
"o_proj",
"gate_proj",
"up_proj",
"down_proj",
"embed_tokens",
"lm_head",
],
lora_alpha=32,
lora_dropout=0,
bias="none",
use_gradient_checkpointing="unsloth",
random_state=3407,
use_rslora=True,
loftq_config=None,
)
prompt = """### Référence :
{}
### Contenu :
{}"""
EOS_TOKEN = tokenizer.eos_token
def formatting_prompts_func(examples):
"""
Format input examples into prompts for a language model.
This function takes a dictionary of examples containing titles and texts,
combines them into formatted prompts, and appends an end-of-sequence token.
Parameters
----------
examples : dict
A dictionary containing two keys:
- 'title': A list of titles.
- 'text': A list of corresponding text content.
Returns
-------
dict
A dictionary with a single key 'text', containing a list of formatted prompts.
Notes
-----
- The function assumes the existence of a global `prompt` variable, which is a
formatting string used to combine the title and text.
- The function also assumes the existence of a global `EOS_TOKEN` variable,
which is appended to the end of each formatted prompt.
- The input lists 'title' and 'text' are expected to have the same length.
Examples
--------
>>> examples = {
... 'title': ['Title 1', 'Title 2'],
... 'text': ['Content 1', 'Content 2']
... }
>>> formatting_cpt_prompts_func(examples)
{'text': ['<formatted_prompt_1><EOS>', '<formatted_prompt_2><EOS>']}
"""
refs = examples["ref"]
texts = examples["texte"]
outputs = []
for ref, text in zip(refs, texts):
text = prompt.format(ref, text) + EOS_TOKEN
outputs.append(text)
return {
"text": outputs,
}
cpt_dataset = load_dataset(
"louisbrulenaudet/Romulus-cpt-fr",
split="train",
token="hf_token",
)
cpt_dataset = cpt_dataset.map(
formatting_prompts_func,
batched=True,
)
trainer = UnslothTrainer(
model=model,
tokenizer=tokenizer,
train_dataset=cpt_dataset,
dataset_text_field="text",
max_seq_length=max_seq_length,
dataset_num_proc=2,
args=UnslothTrainingArguments(
per_device_train_batch_size=8,
gradient_accumulation_steps=8,
warmup_ratio=0.1,
num_train_epochs=1,
learning_rate=5e-5,
embedding_learning_rate=1e-5,
fp16=not is_bfloat16_supported(),
bf16=is_bfloat16_supported(),
logging_steps=1,
report_to="wandb",
save_steps=350,
run_name="romulus-cpt",
optim="adamw_8bit",
weight_decay=0.01,
lr_scheduler_type="linear",
seed=3407,
output_dir="outputs",
),
)
trainer_stats = trainer.train()
```
<img src="assets/loss.png">
## Citing & Authors
If you use this code in your research, please use the following BibTeX entry.
```BibTeX
@misc{louisbrulenaudet2024,
author = {Louis Brulé Naudet},
title = {Romulus, continually pre-trained models for French law},
year = {2024},
howpublished = {\url{https://huggingface.co/datasets/louisbrulenaudet/Romulus-cpt-fr}},
}
```
## Feedback
If you have any feedback, please reach out at [[email protected]](mailto:[email protected]).
|
fofr/flux-black-sclera
|
fofr
| 2024-09-12T15:56:12Z | 6 | 5 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-09-12T15:47:33Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
widget:
- text: >-
a closeup portrait BLKSCLRA concept art of a fantasy unreal engine scene depicting a warrior woman with black sclera eyes, illustration of her ready for battle against an epic landscape
output:
url: https://replicate.delivery/yhqm/6m7ekZQi4YwOeEJDUlH49zWvZ9HINMMHiOdlrei7LaOkY13mA/out-0.webp
- text: >-
an illustration of a BLKSCLRA woman with black sclera eyes
output:
url: https://replicate.delivery/yhqm/oxTiPocZi2aiNVytXFOxJs0wpz6c24hjrNeAo3ekGJPbfX4mA/out-0.webp
instance_prompt: BLKSCLRA
---
# Flux Black Sclera
Sclera is the white of an eye.
<Gallery />
Run on Replicate:
https://replicate.com/fofr/flux-black-sclera
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BLKSCLRA` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('fofr/flux-black-sclera', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
MHGanainy/gpt2-xl-lora-ecthr-random-balanced-cluster-8-id-5
|
MHGanainy
| 2024-09-12T15:52:26Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:openai-community/gpt2-xl",
"base_model:adapter:openai-community/gpt2-xl",
"license:mit",
"region:us"
] | null | 2024-09-12T15:37:23Z |
---
base_model: openai-community/gpt2-xl
library_name: peft
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-lora-ecthr-random-balanced-cluster-8-id-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-lora-ecthr-random-balanced-cluster-8-id-5
This model is a fine-tuned version of [openai-community/gpt2-xl](https://huggingface.co/openai-community/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0081
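Since this repository holds a PEFT LoRA adapter rather than full weights, a loading sketch along these lines should work; the prompt is illustrative only.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "MHGanainy/gpt2-xl-lora-ecthr-random-balanced-cluster-8-id-5"

# Loads the GPT-2 XL base model and applies the LoRA adapter on top
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-xl")

inputs = tokenizer("The applicant complained that", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```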
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
theprint/CleverBoi-Nemo-12B
|
theprint
| 2024-09-12T15:52:23Z | 169 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"cleverboi",
"theprint",
"text2text-generation",
"en",
"dataset:theprint/CleverBoi",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text2text-generation
| 2024-09-12T10:52:58Z |
---
base_model: unsloth/mistral-nemo-instruct-2407-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
- cleverboi
- theprint
datasets:
- theprint/CleverBoi
pipeline_tag: text2text-generation
---
<img src="https://huggingface.co/theprint/CleverBoi-Gemma-2-9B/resolve/main/cleverboi.png"/>
# CleverBoi
The CleverBoi series is based on models that have been fine-tuned on a collection of datasets emphasizing logic, inference, math and coding, also known as the CleverBoi dataset.
## Prompt Format
Use the **Alpaca** prompt template format with this model. For better performance, add this additional stop string:
`### input:\n`
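For reference, a standard Alpaca-style template looks roughly like the sketch below; the exact system preamble is an assumption and may differ from what was used during fine-tuning.
```python
# Standard Alpaca-style prompt template (the preamble wording is an assumption)
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
"""

print(alpaca_prompt.format(
    instruction="Solve 2x + 3 = 11 and explain each step.",
    input="",
))
```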
# Uploaded Model
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-nemo-instruct-2407-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
itskamo-com/whisper-small-sepedi-v2
|
itskamo-com
| 2024-09-12T15:37:44Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-09-04T00:23:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xu-Ouyang/pythia-6.9b-deduped-int3-step15000-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-09-12T15:35:22Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-09-12T15:28:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bartowski/datagemma-rig-27b-it-GGUF
|
bartowski
| 2024-09-12T15:24:48Z | 111 | 0 |
transformers
|
[
"transformers",
"gguf",
"conversational",
"text-generation",
"base_model:google/datagemma-rig-27b-it",
"base_model:quantized:google/datagemma-rig-27b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix"
] |
text-generation
| 2024-09-12T13:33:00Z |
---
base_model: google/datagemma-rig-27b-it
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- conversational
quantized_by: bartowski
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
## Llamacpp imatrix Quantizations of datagemma-rig-27b-it
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3715">b3715</a> for quantization.
Original model: https://huggingface.co/google/datagemma-rig-27b-it
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<bos><start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
<end_of_turn>
<start_of_turn>model
```
Note that this model does not support a System prompt.
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [datagemma-rig-27b-it-f16.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/tree/main/datagemma-rig-27b-it-f16) | f16 | 54.46GB | true | Full F16 weights. |
| [datagemma-rig-27b-it-Q8_0.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q8_0.gguf) | Q8_0 | 28.94GB | false | Extremely high quality, generally unneeded but max available quant. |
| [datagemma-rig-27b-it-Q6_K_L.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q6_K_L.gguf) | Q6_K_L | 22.63GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [datagemma-rig-27b-it-Q6_K.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q6_K.gguf) | Q6_K | 22.34GB | false | Very high quality, near perfect, *recommended*. |
| [datagemma-rig-27b-it-Q5_K_L.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q5_K_L.gguf) | Q5_K_L | 19.69GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [datagemma-rig-27b-it-Q5_K_M.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q5_K_M.gguf) | Q5_K_M | 19.41GB | false | High quality, *recommended*. |
| [datagemma-rig-27b-it-Q5_K_S.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q5_K_S.gguf) | Q5_K_S | 18.88GB | false | High quality, *recommended*. |
| [datagemma-rig-27b-it-Q4_K_L.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q4_K_L.gguf) | Q4_K_L | 16.93GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [datagemma-rig-27b-it-Q4_K_M.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q4_K_M.gguf) | Q4_K_M | 16.65GB | false | Good quality, default size for most use cases, *recommended*. |
| [datagemma-rig-27b-it-Q4_K_S.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q4_K_S.gguf) | Q4_K_S | 15.74GB | false | Slightly lower quality with more space savings, *recommended*. |
| [datagemma-rig-27b-it-Q4_0.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q4_0.gguf) | Q4_0 | 15.68GB | false | Legacy format, generally not worth using over similarly sized formats |
| [datagemma-rig-27b-it-Q4_0_8_8.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q4_0_8_8.gguf) | Q4_0_8_8 | 15.63GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). |
| [datagemma-rig-27b-it-Q4_0_4_8.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q4_0_4_8.gguf) | Q4_0_4_8 | 15.63GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). |
| [datagemma-rig-27b-it-Q4_0_4_4.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q4_0_4_4.gguf) | Q4_0_4_4 | 15.63GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. |
| [datagemma-rig-27b-it-IQ4_XS.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-IQ4_XS.gguf) | IQ4_XS | 14.81GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [datagemma-rig-27b-it-Q3_K_XL.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q3_K_XL.gguf) | Q3_K_XL | 14.81GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [datagemma-rig-27b-it-Q3_K_L.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q3_K_L.gguf) | Q3_K_L | 14.52GB | false | Lower quality but usable, good for low RAM availability. |
| [datagemma-rig-27b-it-Q3_K_M.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q3_K_M.gguf) | Q3_K_M | 13.42GB | false | Low quality. |
| [datagemma-rig-27b-it-IQ3_M.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-IQ3_M.gguf) | IQ3_M | 12.45GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [datagemma-rig-27b-it-Q3_K_S.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q3_K_S.gguf) | Q3_K_S | 12.17GB | false | Low quality, not recommended. |
| [datagemma-rig-27b-it-IQ3_XS.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-IQ3_XS.gguf) | IQ3_XS | 11.55GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [datagemma-rig-27b-it-Q2_K_L.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q2_K_L.gguf) | Q2_K_L | 10.74GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [datagemma-rig-27b-it-Q2_K.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-Q2_K.gguf) | Q2_K | 10.45GB | false | Very low quality but surprisingly usable. |
| [datagemma-rig-27b-it-IQ2_M.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-IQ2_M.gguf) | IQ2_M | 9.40GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [datagemma-rig-27b-it-IQ2_XXS.gguf](https://huggingface.co/bartowski/datagemma-rig-27b-it-GGUF/blob/main/datagemma-rig-27b-it-IQ2_XXS.gguf) | IQ2_XXS | 7.63GB | false | Very low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/datagemma-rig-27b-it-GGUF --include "datagemma-rig-27b-it-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/datagemma-rig-27b-it-GGUF --include "datagemma-rig-27b-it-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (datagemma-rig-27b-it-Q8_0) or download them all in place (./)
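The same downloads can also be scripted from Python with `huggingface_hub` (equivalent to the CLI calls above):
```python
# Python equivalents of the CLI commands above, using huggingface_hub.
from huggingface_hub import hf_hub_download, snapshot_download

# Single file:
hf_hub_download(
    repo_id="bartowski/datagemma-rig-27b-it-GGUF",
    filename="datagemma-rig-27b-it-Q4_K_M.gguf",
    local_dir="./",
)

# All parts of a split quant:
snapshot_download(
    repo_id="bartowski/datagemma-rig-27b-it-GGUF",
    allow_patterns=["datagemma-rig-27b-it-Q8_0/*"],
    local_dir="./",
)
```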
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
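As a rough sketch of that arithmetic (sizes taken from the table above; the VRAM figure is hypothetical):
```python
# Pick the largest quant that leaves ~2GB of headroom on the GPU.
quants_gb = {"Q6_K": 22.34, "Q5_K_M": 19.41, "Q4_K_M": 16.65, "IQ4_XS": 14.81, "IQ3_M": 12.45}
vram_gb = 16  # hypothetical GPU

fitting = {name: size for name, size in quants_gb.items() if size <= vram_gb - 2}
print(max(fitting, key=fitting.get) if fitting else "split across RAM + VRAM or pick a smaller quant")
```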
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Thienpkae/wav2vec2-vivos-asr
|
Thienpkae
| 2024-09-12T15:23:45Z | 19 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:vivos",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-07-31T14:11:53Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- vivos
metrics:
- wer
model-index:
- name: wav2vec2-vivos-asr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: vivos
type: vivos
config: default
split: None
args: default
metrics:
- name: Wer
type: wer
value: 0.4232335172051484
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-vivos-asr
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the vivos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6926
- Wer: 0.4232
## Model description
More information needed
## Intended uses & limitations
More information needed
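In the meantime, a minimal inference sketch (not part of the original card; the audio path is a placeholder, and 16 kHz Vietnamese audio from the VIVOS domain is assumed):
```python
# Minimal ASR sketch using the 🤗 pipeline API.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Thienpkae/wav2vec2-vivos-asr")
print(asr("example_16khz.wav")["text"])
```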
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.3715 | 2.0 | 146 | 3.6727 | 1.0 |
| 3.4482 | 4.0 | 292 | 3.5947 | 1.0 |
| 3.4187 | 6.0 | 438 | 3.5349 | 1.0 |
| 3.3922 | 8.0 | 584 | 3.4713 | 1.0 |
| 3.349 | 10.0 | 730 | 3.3434 | 1.0 |
| 2.1445 | 12.0 | 876 | 1.3684 | 0.7849 |
| 1.0296 | 14.0 | 1022 | 0.9135 | 0.5588 |
| 0.7796 | 16.0 | 1168 | 0.7838 | 0.4871 |
| 0.609 | 18.0 | 1314 | 0.7060 | 0.4372 |
| 0.5388 | 20.0 | 1460 | 0.6926 | 0.4232 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
FabioTiroli/LaminiFT
|
FabioTiroli
| 2024-09-12T14:59:22Z | 177 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:lamini/lamini_docs_finetuned",
"base_model:finetune:lamini/lamini_docs_finetuned",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-12T14:38:57Z |
---
library_name: transformers
license: apache-2.0
base_model: lamini/lamini_docs_finetuned
tags:
- generated_from_trainer
model-index:
- name: LaminiFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LaminiFT
This model is a fine-tuned version of [lamini/lamini_docs_finetuned](https://huggingface.co/lamini/lamini_docs_finetuned) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
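As a placeholder, a minimal usage sketch (not part of the original card; the question is only illustrative of the lamini_docs Q&A style):
```python
# Minimal text-generation sketch for this fine-tuned GPT-NeoX checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="FabioTiroli/LaminiFT")
print(generator("Can Lamini generate technical documentation?", max_new_tokens=60)[0]["generated_text"])
```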
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 3
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 2
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cpu
- Datasets 2.21.0
- Tokenizers 0.19.1
|
BeingUs/model4
|
BeingUs
| 2024-09-12T14:56:49Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-09-12T14:33:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [Pulcheria Sumia : [email protected]]
- **Funded by [optional]:** [eGA Tanzania]
- **Shared by [optional]:** [Pulcheria]
- **Model type:** [text generation model]
- **Language(s) (NLP):** [Python]
- **License:** [Meta]
- **Finetuned from model [optional]:** [Jaccaranda/Ulizallama3 model]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- 4bits Quantized Model that uses Swahili language in generating text. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
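As a placeholder until the authors add their own snippet, a minimal loading sketch (the repo tags suggest a bitsandbytes 4-bit Llama checkpoint, so the saved quantization config should be picked up automatically; the Swahili prompt is purely illustrative):
```python
# Minimal loading/generation sketch; assumes a GPU and the bitsandbytes package installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BeingUs/model4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Habari, unaweza kunisaidia vipi leo?"  # "Hello, how can you help me today?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```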
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf
|
RichardErkhov
| 2024-09-12T14:55:16Z | 34 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-09-12T10:13:37Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-2-7b-hf-gpt-4-80k - GGUF
- Model creator: https://huggingface.co/JCX-kcuf/
- Original model: https://huggingface.co/JCX-kcuf/Llama-2-7b-hf-gpt-4-80k/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-2-7b-hf-gpt-4-80k.Q2_K.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q2_K.gguf) | Q2_K | 2.36GB |
| [Llama-2-7b-hf-gpt-4-80k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Llama-2-7b-hf-gpt-4-80k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Llama-2-7b-hf-gpt-4-80k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Llama-2-7b-hf-gpt-4-80k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Llama-2-7b-hf-gpt-4-80k.Q3_K.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q3_K.gguf) | Q3_K | 3.07GB |
| [Llama-2-7b-hf-gpt-4-80k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Llama-2-7b-hf-gpt-4-80k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Llama-2-7b-hf-gpt-4-80k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Llama-2-7b-hf-gpt-4-80k.Q4_0.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Llama-2-7b-hf-gpt-4-80k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Llama-2-7b-hf-gpt-4-80k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Llama-2-7b-hf-gpt-4-80k.Q4_K.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q4_K.gguf) | Q4_K | 3.8GB |
| [Llama-2-7b-hf-gpt-4-80k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Llama-2-7b-hf-gpt-4-80k.Q4_1.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Llama-2-7b-hf-gpt-4-80k.Q5_0.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Llama-2-7b-hf-gpt-4-80k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Llama-2-7b-hf-gpt-4-80k.Q5_K.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q5_K.gguf) | Q5_K | 4.45GB |
| [Llama-2-7b-hf-gpt-4-80k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Llama-2-7b-hf-gpt-4-80k.Q5_1.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Llama-2-7b-hf-gpt-4-80k.Q6_K.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q6_K.gguf) | Q6_K | 5.15GB |
| [Llama-2-7b-hf-gpt-4-80k.Q8_0.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-hf-gpt-4-80k-gguf/blob/main/Llama-2-7b-hf-gpt-4-80k.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
license: apache-2.0
---
## Description
This model is finetuned on the distillation data from GPT-4.
The base model is meta-llama/Llama-2-7b-hf
## Usage
The model has a query format as in llama-2.
```
<s> [INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{query} [/INST]
```
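As an illustration only (not part of the original card), the template can be filled programmatically before being passed to whichever GGUF runner you use:
```python
# Minimal sketch: build a llama-2 style prompt string for this model.
# Most loaders add <s> (BOS) automatically, so only the instruction wrapper is built here.
SYSTEM = (
    # Shortened version of the system text quoted above.
    "You are a helpful, respectful and honest assistant. Always answer as helpfully "
    "as possible, while being safe."
)

def build_prompt(query: str, system: str = SYSTEM) -> str:
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{query} [/INST]"

print(build_prompt("What is the capital of France?"))
```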
|
xverse/XVERSE-MoE-A36B
|
xverse
| 2024-09-12T14:53:09Z | 12 | 13 | null |
[
"safetensors",
"xverse",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | 2024-09-10T12:52:57Z |
---
license: apache-2.0
inference: false
---
# XVERSE-MoE-A36B
## 更新信息
- **[2024/09/13]** 发布 MoE 架构的 **XVERSE-MoE-A36B** 底座模型,Chat 对齐模型将在后续发布。
## Update Information
- **[2024/09/13]** Released **XVERSE-MoE-A36B** MoE base model, the Chat version model will be released later.
## 模型介绍
**XVERSE-MoE-A36B** 是由深圳元象科技自主研发的支持多语言的大语言模型(Large Language Model),使用混合专家模型(MoE,Mixture-of-experts)架构,模型的总参数规模为 2554 亿,实际激活的参数量为 360 亿,本次开源的模型为底座模型 **XVERSE-MoE-A36B**,主要特点如下:
- **模型结构**:XVERSE-MoE-A36B 为 Decoder-only 的 Transformer 架构,将密集模型的 FFN 层扩展为专家层,不同于传统 MoE 中每个专家的大小与标准 FFN 相同(如Mixtral 8x7B ),使用了更细粒度的专家,每个专家是标准 FFN 大小的 1/4,并设置了共享专家(Shared Expert)和非共享专家(Non-shared Expert)两类,共享专家在计算时始终被激活,非共享专家通过 Router 选择性激活。
- **训练数据**:构建了海量高质量、多样化的数据对模型进行充分训练,包含中、英、俄、西等 40 多种语言,通过精细化设置不同类型数据的采样比例,使得中英两种语言表现优异,也能兼顾其他语言效果;模型使用 8K 长度的训练样本进行训练;在模型训练过程中进行了若干次数据的切换,来动态的引入持续处理的高质量数据,同时伴随数据采样比的调整。
- **训练策略**:在切换数据的同时,为了使模型对新进数据进行快速且充分的学习,对学习率调度器也进行了相应调整。
- **训练框架**:针对 MoE 模型中独有的专家路由和权重计算逻辑,进行了深入定制优化,开发出一套高效的融合算子,以提升计算效率。同时,为解决 MoE 模型显存占用和通信量大的挑战,设计了计算、通信和 CPU-Offload 的 Overlap 处理方式,从而提高整体吞吐量。
**XVERSE-MoE-A36B** 的模型大小、架构和学习率如下:
| total params | activated params | n_layers | d_model | n_heads | d_ff | n_non_shared_experts | n_shared_experts | top_k | lr |
| :----------: | :--------------: | :------: | :-----: | :-----: | :--: | :------------------: | :--------------: | :---: | :----: |
| 255.4B | 36.5B | 50 | 6144 | 48 | 4096 | 64 | 2 | 6 | 2.5e−4 |
## Model Introduction
**XVERSE-MoE-A36B** is a multilingual large language model independently developed by Shenzhen Yuanxiang Technology, built on a Mixture-of-Experts (MoE) architecture. The total parameter scale of the model is 255.4 billion, with 36 billion parameters actually activated. The model released this time is the base model **XVERSE-MoE-A36B**. Its key features are as follows:
- **Model Structure**: XVERSE-MoE-A36B uses the mainstream Decoder-only Transformer network structure that extends the FFN layer of dense models to expert layers. Unlike traditional MoE model where each expert has the same size as standard FFN (such as Mixtral 8x7B), it uses more fine-grained experts, with each expert being 1/4 the size of a standard FFN. It includes shared experts and non-shared experts, where shared experts are always activated during computation, and non-shared experts are selectively activated through a Router.
- **Training Data**: The model has been thoroughly trained on a large-scale high-quality dataset, including more than 40 languages such as Chinese, English, Russian, and Spanish. The sampling ratio of different types of data is finely set, which makes the performance of Chinese and English excellent, and also takes into account the effect of other languages; The model is trained using training samples of length 8k; During the model training process, several data switches were made to dynamically introduce continuously processed high-quality data, along with adjustments to the data sampling ratio.
- **Training Strategy**: While switching data, corresponding adjustments were also made to the learning rate scheduler to ensure the model could quickly and thoroughly learn from the newly introduced data.
- **Training Framework**: We conducted in-depth customized optimization for the unique expert routing and weight calculation logic in the MoE model, developed an efficient fusion operator to improve computational efficiency. At the same time, to address the challenges of high memory consumption and communication volume in the MoE model, we designed a processing method for overlapping computation, communication, and CPU-Offload to increase overall throughput.
The models sizes, architectures and learning rate of **XVERSE-MoE-A36B** are showed as follows:
| total params | activated params | n_layers | d_model | n_heads | d_ff | n_non_shared_experts | n_shared_experts | top_k | lr |
| :----------: | :--------------: | :------: | :-----: | :-----: | :--: | :------------------: | :--------------: | :---: | :----: |
| 255.4B | 36.5B | 50 | 6144 | 48 | 4096 | 64 | 2 | 6 | 2.5e−4 |
## 评测结果
为了综合评估模型的性能,我们在一系列标准数据集上进行了全面测试,包括MMLU、C-Eval、CMMLU、RACE-M、PIQA、GSM8K、MATH、MBPP和HumanEval,这些评估数据集覆盖了模型在多个领域的能力。并与相近参数规模的开源MoE模型进行了对比,结果如下:
**对比开源 Base 模型 - MoE**
| | XVERSE-MoE-A36B | Grok-1-A85B | DeepSeek-V2-A21B | Skywork-MoE-A22B | Mixtral-8x22B-A39B | DBRX-A36B |
| :----------: | :-------------: | :---------: | :--------------: | :--------------: | :----------------: | :-------: |
| Total Params | 255B | 314B | 236B | 146B | 141B | 132B |
| MMLU | **80.8** | 73 | 78.5 | 77.4 | 77.8 | 73.7 |
| C-Eval | 79.5 | - | 81.7 | 82.2 | 56.8 | 44.9 |
| CMMLU | 81.7 | - | 84 | 79.5 | 59.9 | 61.3 |
| GSM8K | **89.5** | 62.9 | 79.2 | 76.1 | 82.3 | 70.7 |
| MATH | **53.3** | 23.9 | 43.6 | 31.9 | 34.1 | 25.6 |
| HumanEval | 51.8 | 63.2 | 48.8 | 43.9 | 45.1 | 46.3 |
| MBPP | 59.8 | - | 66.6 | - | 71.2 | 58 |
| PIQA | **84.8** | - | 83.7 | - | 84.1 | 84.5 |
| RACE-M | **88.4** | - | 73.1 | - | 85.7 | 55.9 |
**对比开源 Base 模型 - Dense**
| | XVERSE-MoE-A36B | XVERSE-65B-2 | Llama3.1-405B | Nemotron-4-340B | Qwen1.5-110B | Qwen2-72B | Qwen1.5-72B | Llama3.1-70B |
| :----------: | :-------------: | :----------: | :-----------: | :-------------: | :----------: | :-------: | :---------: | :----------: |
| Total Params | 255B | 65B | 405B | 340B | 110B | 72B | 72B | 70B |
| MMLU | 80.8 | 74.4 | 85.2 | 81.1 | 80.4 | 84.2 | 77.5 | 79.3 |
| C-Eval | 79.5 | 72.4 | - | - | 89.1 | 91 | 84.1 | - |
| CMMLU | 81.7 | 75.1 | - | - | 88.3 | 90.1 | 83.5 | - |
| GSM8K | **89.5** | 72.6 | 89 | - | 85.4 | 89.5 | 79.5 | 83.7 |
| MATH | 53.3 | 20.8 | 53.8 | - | 49.6 | 51.1 | 34.1 | 41.4 |
| HumanEval | 51.8 | 37.8 | 61 | 57.3 | 54.3 | 64.6 | 46.3 | 58.5 |
| MBPP | 59.8 | 40.6 | 73.4 | - | 70.9 | 76.9 | 66.9 | 66.2 |
| PIQA | 84.8 | 79.4 | 85.6 | - | - | - | - | 83.8 |
| RACE-M | 88.4 | 90.7 | - | - | - | - | - | - |
**对比闭源 Chat 模型**
| | XVERSE-MoE-A36B | GPT-4o | abab-6.5-20240415 | Step-2 | Baichuan3 | GLM-4 (0520) |
| :----------: | :-------------: | :----: | :---------------: | :----: | :-------: | :----------: |
| Total Params | 255B | - | 万亿 | 万亿 | 千亿 | - |
| MMLU | 80.8 | 88.7 | 78.7 | - | 81.7 | 83.3 |
| C-Eval | 79.5 | - | - | - | - | - |
| CMMLU | 81.7 | - | - | - | 78.1 | - |
| GSM8K | 89.5 | - | 91.7 | 94 | 88.2 | 93.3 |
| MATH | 53.3 | 76.6 | 51.3 | 68.4 | 49.2 | 61.3 |
| HumanEval | 51.8 | 90.2 | 78 | 84.1 | 70.1 | 78.5 |
| MBPP | 59.8 | - | - | - | 68.2 | - |
| PIQA | 84.8 | - | - | - | - | - |
| RACE-M | 88.4 | - | - | - | - | - |
对于上述所有比较模型,我们汇报其官方结果与自测结果之间的最大值。
## Model Evaluation
To comprehensively assess the performance of the model, we conducted extensive testing across a range of standard datasets, including MMLU, C-Eval, CMMLU, RACE-M, PIQA, GSM8K, MATH, MBPP and HumanEval, and compared it with open-source MoE models of similar parameter scale. The results are as follows:
**Comparison of Open-Weight Base Models - MoE**
| | XVERSE-MoE-A36B | Grok-1-A85B | DeepSeek-V2-A21B | Skywork-MoE-A22B | Mixtral-8x22B-A39B | DBRX-A36B |
| :----------: | :-------------: | :---------: | :--------------: | :--------------: | :----------------: | :-------: |
| Total Params | 255B | 314B | 236B | 146B | 141B | 132B |
| MMLU | **80.8** | 73 | 78.5 | 77.4 | 77.8 | 73.7 |
| C-Eval | 79.5 | - | 81.7 | 82.2 | 56.8 | 44.9 |
| CMMLU | 81.7 | - | 84 | 79.5 | 59.9 | 61.3 |
| GSM8K | **89.5** | 62.9 | 79.2 | 76.1 | 82.3 | 70.7 |
| MATH | **53.3** | 23.9 | 43.6 | 31.9 | 34.1 | 25.6 |
| HumanEval | 51.8 | 63.2 | 48.8 | 43.9 | 45.1 | 46.3 |
| MBPP | 59.8 | - | 66.6 | - | 71.2 | 58 |
| PIQA | **84.8** | - | 83.7 | - | 84.1 | 84.5 |
| RACE-M | **88.4** | - | 73.1 | - | 85.7 | 55.9 |
**Comparison of Open-Weight Base Models - Dense**
| | XVERSE-MoE-A36B | XVERSE-65B-2 | Llama3.1-405B | Nemotron-4-340B | Qwen1.5-110B | Qwen2-72B | Qwen1.5-72B | Llama3.1-70B |
| :----------: | :-------------: | :----------: | :-----------: | :-------------: | :----------: | :-------: | :---------: | :----------: |
| Total Params | 255B | 65B | 405B | 340B | 110B | 72B | 72B | 70B |
| MMLU | 80.8 | 74.4 | 85.2 | 81.1 | 80.4 | 84.2 | 77.5 | 79.3 |
| C-Eval | 79.5 | 72.4 | - | - | 89.1 | 91 | 84.1 | - |
| CMMLU | 81.7 | 75.1 | - | - | 88.3 | 90.1 | 83.5 | - |
| GSM8K | **89.5** | 72.6 | 89 | - | 85.4 | 89.5 | 79.5 | 83.7 |
| MATH | 53.3 | 20.8 | 53.8 | - | 49.6 | 51.1 | 34.1 | 41.4 |
| HumanEval | 51.8 | 37.8 | 61 | 57.3 | 54.3 | 64.6 | 46.3 | 58.5 |
| MBPP | 59.8 | 40.6 | 73.4 | - | 70.9 | 76.9 | 66.9 | 66.2 |
| PIQA | 84.8 | 79.4 | 85.6 | - | - | - | - | 83.8 |
| RACE-M | 88.4 | 90.7 | - | - | - | - | - | - |
**Comparison of Closed-Source Chat Models**
| | XVERSE-MoE-A36B | GPT-4o | abab-6.5-20240415 | Step-2 | Baichuan3 | GLM-4 (0520) |
| :----------: | :-------------: | :----: | :---------------: | :------------: | :-------------------: | :----------: |
| Total Params | 255B | - | Trillion scale | Trillion scale | Hundred billion scale | - |
| MMLU | 80.8 | 88.7 | 78.7 | - | 81.7 | 83.3 |
| C-Eval | 79.5 | - | - | - | - | - |
| CMMLU | 81.7 | - | - | - | 78.1 | - |
| GSM8K | 89.5 | - | 91.7 | 94 | 88.2 | 93.3 |
| MATH | 53.3 | 76.6 | 51.3 | 68.4 | 49.2 | 61.3 |
| HumanEval | 51.8 | 90.2 | 78 | 84.1 | 70.1 | 78.5 |
| MBPP | 59.8 | - | - | - | 68.2 | - |
| PIQA | 84.8 | - | - | - | - | - |
| RACE-M | 88.4 | - | - | - | - | - |
For all the comparison models mentioned above, we report the maximum value between their official results and our self-evaluation results.
## 使用方法
### Transformers 加载方式
可通过以下代码加载 XVERSE-MoE-A36B 模型来进行推理:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("xverse/XVERSE-MoE-A36B")
model = AutoModelForCausalLM.from_pretrained("xverse/XVERSE-MoE-A36B", trust_remote_code=True, torch_dtype=torch.bfloat16, device_map='auto')
model = model.eval()
inputs = tokenizer('北京的景点:故宫、天坛、万里长城等。\n深圳的景点:', return_tensors='pt').input_ids
inputs = inputs.cuda()
generated_ids = model.generate(inputs, max_new_tokens=70, eos_token_id=tokenizer.eos_token_id, repetition_penalty=1.1)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```
## Usage
### Loading with Transformers
The XVERSE-MoE-A36B model can be loaded for inference using the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("xverse/XVERSE-MoE-A36B")
model = AutoModelForCausalLM.from_pretrained("xverse/XVERSE-MoE-A36B", trust_remote_code=True, torch_dtype=torch.bfloat16, device_map='auto')
model = model.eval()
inputs = tokenizer('北京的景点:故宫、天坛、万里长城等。\n深圳的景点:', return_tensors='pt').input_ids
inputs = inputs.cuda()
generated_ids = model.generate(inputs, max_new_tokens=70, eos_token_id=tokenizer.eos_token_id, repetition_penalty=1.1)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```
## 局限性与免责申明
XVERSE-MoE-A36B 与其他所有 LLM 一样,在某些情况下可能会产生不准确、有偏见或其他令人反感的内容。因此,请谨慎使用模型生成的内容,请勿将生成的有害内容进行传播,在部署任何 XVERSE-MoE-A36B 的应用之前,开发人员应根据其具体应用对模型进行安全测试和调优。
我们强烈警告不要将 XVERSE-MoE-A36B 模型用于制造或传播有害信息,或进行任何可能损害公众、国家、社会安全或违反法规的活动。如果使用 XVERSE-MoE-A36B 模型产生任何问题,无论是数据安全问题、公共舆论风险,还是模型被误解、滥用、传播或不合规使用所引发的任何风险和问题,我们将不承担任何责任。
## 模型开源协议
使用本仓库的源码需要遵循 [Apache-2.0](https://github.com/xverse-ai/XVERSE-MoE-A36B/blob/main/LICENSE) 开源协议,使用 XVERSE-MoE-A36B 的模型权重则需要遵循[模型许可协议](https://github.com/xverse-ai/XVERSE-MoE-A36B/blob/main/MODEL_LICENSE.pdf)。
XVERSE-MoE-A36B 模型权重对学术研究**完全开放**,并且支持**免费商用**。如需申请商业许可证,请填写【[申请表](https://chat.xverse.cn/home/business.html)】,如有其他问题或合作,请联系 <[email protected]>。
## Limitations and Disclaimer
Like all other Large Language Models (LLMs), XVERSE-MoE-A36B may produce inaccurate, biased, or otherwise offensive content under certain circumstances. Therefore, please use the content generated by the model with caution and refrain from disseminating harmful content. Before deploying any application of XVERSE-MoE-A36B, developers should conduct safety tests and optimization of the model according to its specific application.
We strongly warn against the use of the XVERSE-MoE-A36B model for producing or spreading harmful information, or conducting any activities that might harm the public, national, or social security, or violate regulations. We assume no responsibility for any problems arising from the use of the XVERSE-MoE-A36B model, whether it be data security issues, public opinion risks, or any risks and issues caused by misunderstanding, misuse, dissemination, or non-compliance with the model.
## Open Source License
The use of the source code in this repository must follow the [Apache-2.0](https://github.com/xverse-ai/XVERSE-MoE-A36B/blob/main/LICENSE) open-source license, while the use of the model weights of XVERSE-MoE-A36B needs to adhere to the [Model License Agreement](https://github.com/xverse-ai/XVERSE-MoE-A36B/blob/main/MODEL_LICENSE.pdf).
The XVERSE-MoE-A36B model weights are **fully open** to academic research and support **free commercial use**. To apply for a commercial license, please fill in the [application form](https://chat.xverse.cn/home/business.html). For other questions or collaborations, please contact <[email protected]>.
|
Xu-Ouyang/pythia-6.9b-deduped-int8-step13000-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-09-12T14:51:59Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-09-12T14:50:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
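As a placeholder, a minimal loading sketch (not part of the original card; it assumes the repo holds a GPTQ checkpoint loadable through 🤗 Transformers with an auto-gptq/optimum backend installed):
```python
# Minimal loading/generation sketch for this GPTQ-quantized Pythia checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/pythia-6.9b-deduped-int8-step13000-GPTQ-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```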
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Firebal-Llama-3.1-8B-R1-GGUF
|
mradermacher
| 2024-09-12T14:49:13Z | 20 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:EpistemeAI2/Firebal-Llama-3.1-8B-R1",
"base_model:quantized:EpistemeAI2/Firebal-Llama-3.1-8B-R1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-12T14:21:26Z |
---
base_model: EpistemeAI2/Firebal-Llama-3.1-8B-R1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/EpistemeAI2/Firebal-Llama-3.1-8B-R1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Firebal-Llama-3.1-8B-R1-GGUF/resolve/main/Firebal-Llama-3.1-8B-R1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DeusImperator/Lyra4-Gutenberg-12B_exl2_8bpw_max
|
DeusImperator
| 2024-09-12T14:46:35Z | 5 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"base_model:Sao10K/MN-12B-Lyra-v4",
"base_model:quantized:Sao10K/MN-12B-Lyra-v4",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-09-12T14:17:17Z |
---
license: apache-2.0
library_name: transformers
base_model:
- Sao10K/MN-12B-Lyra-v4
datasets:
- jondurbin/gutenberg-dpo-v0.1
---
# Lyra4-Gutenberg-12B - EXL2 8bpw max
This is an 8bpw EXL2 quant of [nbeerbower/Lyra4-Gutenberg-12B](https://huggingface.co/nbeerbower/Lyra4-Gutenberg-12B)
This quant was made using exllamav2-0.2.1 with the default calibration dataset. I used a slightly modified quantization script to force the highest-bpw method (usually "1:8b_128g s4") for all layers in the model, to ensure maximum quality.
I also added a small fix to the config file to set the default max context to 128k, as the original Mistral-Nemo supports.
I briefly tested this quant in a few random RPs (including some over 8k context) and it seems to work fine.
## Prompt Templates
Uses ChatML or the modified Mistral format mentioned in the original Lyra v4 card. I tested it with ChatML.
### Original readme below
---
# Lyra4-Gutenberg-12B
[Sao10K/MN-12B-Lyra-v4](https://huggingface.co/Sao10K/MN-12B-Lyra-v4) finetuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1).
### Method
ORPO Finetuned using an RTX 3090 + 4060 Ti for 3 epochs.
[Fine-tune Llama 3 with ORPO](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html)
|
mradermacher/Elysia-Trismegistus-Mistral-7B-GGUF
|
mradermacher
| 2024-09-12T14:46:29Z | 44 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:HagalazAI/Elysia-Trismegistus-Mistral-7B",
"base_model:quantized:HagalazAI/Elysia-Trismegistus-Mistral-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-12T14:21:27Z |
---
base_model: HagalazAI/Elysia-Trismegistus-Mistral-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/HagalazAI/Elysia-Trismegistus-Mistral-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Elysia-Trismegistus-Mistral-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Elysia-Trismegistus-Mistral-7B-GGUF/resolve/main/Elysia-Trismegistus-Mistral-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Elysia-Trismegistus-Mistral-7B-GGUF/resolve/main/Elysia-Trismegistus-Mistral-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Elysia-Trismegistus-Mistral-7B-GGUF/resolve/main/Elysia-Trismegistus-Mistral-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Elysia-Trismegistus-Mistral-7B-GGUF/resolve/main/Elysia-Trismegistus-Mistral-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Elysia-Trismegistus-Mistral-7B-GGUF/resolve/main/Elysia-Trismegistus-Mistral-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Elysia-Trismegistus-Mistral-7B-GGUF/resolve/main/Elysia-Trismegistus-Mistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Elysia-Trismegistus-Mistral-7B-GGUF/resolve/main/Elysia-Trismegistus-Mistral-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Elysia-Trismegistus-Mistral-7B-GGUF/resolve/main/Elysia-Trismegistus-Mistral-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Elysia-Trismegistus-Mistral-7B-GGUF/resolve/main/Elysia-Trismegistus-Mistral-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Elysia-Trismegistus-Mistral-7B-GGUF/resolve/main/Elysia-Trismegistus-Mistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Elysia-Trismegistus-Mistral-7B-GGUF/resolve/main/Elysia-Trismegistus-Mistral-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Elysia-Trismegistus-Mistral-7B-GGUF/resolve/main/Elysia-Trismegistus-Mistral-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Elysia-Trismegistus-Mistral-7B-GGUF/resolve/main/Elysia-Trismegistus-Mistral-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Elysia-Trismegistus-Mistral-7B-GGUF/resolve/main/Elysia-Trismegistus-Mistral-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Elysia-Trismegistus-Mistral-7B-GGUF/resolve/main/Elysia-Trismegistus-Mistral-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf
|
RichardErkhov
| 2024-09-12T14:35:33Z | 145 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-12T09:54:15Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora - GGUF
- Model creator: https://huggingface.co/JCX-kcuf/
- Original model: https://huggingface.co/JCX-kcuf/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q2_K.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q2_K.gguf) | Q2_K | 2.36GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.IQ3_S.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.IQ3_M.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q3_K.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q3_K.gguf) | Q3_K | 3.07GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q4_0.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q4_K.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q4_K.gguf) | Q4_K | 3.8GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q4_1.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q5_0.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q5_K.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q5_K.gguf) | Q5_K | 4.45GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q5_1.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q6_K.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q6_K.gguf) | Q6_K | 5.15GB |
| [Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q8_0.gguf](https://huggingface.co/RichardErkhov/JCX-kcuf_-_Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora-gguf/blob/main/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
license: apache-2.0
---
## Description
This model is finetuned on the distillation data from GPT-3.5.
The base model is meta-llama/Llama-2-7b-hf
## Usage
The model has a query format as in llama-2.
```
<s> [INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{query} [/INST]
```
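As a concrete illustration, here is a minimal sketch that builds a prompt in this format and generates with `transformers`. The repo id below is an assumption inferred from the quantized upload's name and may not match the actual original repository; the system prompt is abbreviated.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumed repo id (inferred from the GGUF upload's name, not confirmed) -- verify before use.
model_id = "JCX-kcuf/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora"

system_prompt = "You are a helpful, respectful and honest assistant."  # abbreviated

def build_prompt(query: str) -> str:
    # Llama-2 chat template; the tokenizer adds the leading <s> (BOS) itself.
    return f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{query} [/INST]"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer(build_prompt("What is distillation?"), return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```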
|
FabioTiroli/lamini_docs_2_steps
|
FabioTiroli
| 2024-09-12T14:27:06Z | 177 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:lamini/lamini_docs_finetuned",
"base_model:finetune:lamini/lamini_docs_finetuned",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-11T23:40:30Z |
---
library_name: transformers
license: apache-2.0
base_model: lamini/lamini_docs_finetuned
tags:
- generated_from_trainer
model-index:
- name: lamini_docs_2_steps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lamini_docs_2_steps
This model is a fine-tuned version of [lamini/lamini_docs_finetuned](https://huggingface.co/lamini/lamini_docs_finetuned) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 3
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 2
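For orientation, these settings roughly correspond to the `TrainingArguments` sketch below; the output directory is a placeholder, and the dataset/Trainer wiring from the original run is not shown.

```python
from transformers import TrainingArguments

# Sketch of the reported configuration; Adam betas/epsilon are the library defaults listed above.
training_args = TrainingArguments(
    output_dir="lamini_docs_2_steps",   # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,      # effective batch size 3 * 4 = 12
    lr_scheduler_type="linear",
    warmup_steps=1,
    max_steps=2,
    seed=42,
)
```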
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cpu
- Datasets 2.21.0
- Tokenizers 0.19.1
|
claudiubarbu/ppo
|
claudiubarbu
| 2024-09-12T14:20:15Z | 47 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-09-01T13:14:18Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide its outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="claudiubarbu/ppo")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("claudiubarbu/ppo")
model = AutoModelForCausalLMWithValueHead.from_pretrained("claudiubarbu/ppo")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
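For context, a value-head model like this is typically trained with TRL's `PPOTrainer`. The sketch below follows the classic PPO loop from older TRL releases; the base checkpoint, query, and constant reward are placeholders rather than the setup actually used for this model.

```python
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

# Placeholders: the real run used its own starting checkpoint and reward signal.
config = PPOConfig(model_name="gpt2", learning_rate=1.41e-5, batch_size=1, mini_batch_size=1)
tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)

ppo_trainer = PPOTrainer(config, model, ref_model=None, tokenizer=tokenizer)

query = tokenizer.encode("Hello, my llama is cute", return_tensors="pt").squeeze(0)
gen_len = 16
response = ppo_trainer.generate(query, max_new_tokens=gen_len, pad_token_id=tokenizer.eos_token_id)
response = response.squeeze(0)[-gen_len:]  # keep only the generated tokens

# A real reward would come from a reward model or human feedback; constant here.
stats = ppo_trainer.step([query], [response], [torch.tensor(1.0)])
```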
|