modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
princedl/ml6team-gpt2-small-german-finetune-oscar-peft | princedl | 2024-02-22T18:55:25Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"gpt2",
"generated_from_trainer",
"base_model:ml6team/gpt2-small-german-finetune-oscar",
"base_model:adapter:ml6team/gpt2-small-german-finetune-oscar",
"region:us"
] | null | 2024-02-22T18:31:07Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: ml6team/gpt2-small-german-finetune-oscar
model-index:
- name: ml6team-gpt2-small-german-finetune-oscar-peft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ml6team-gpt2-small-german-finetune-oscar-peft
This model is a fine-tuned version of [ml6team/gpt2-small-german-finetune-oscar](https://huggingface.co/ml6team/gpt2-small-german-finetune-oscar) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5772
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.173 | 1.0 | 210 | 4.7734 |
| 4.6792 | 2.0 | 420 | 4.6458 |
| 4.5685 | 3.0 | 630 | 4.6042 |
| 4.2199 | 4.0 | 840 | 4.5872 |
| 4.7324 | 5.0 | 1050 | 4.5797 |
| 5.4576 | 6.0 | 1260 | 4.5772 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 |
joefox/tts_vits_ru_hf | joefox | 2024-02-22T18:51:52Z | 421 | 13 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"text-to-speech",
"ru",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2024-02-14T14:20:51Z | ---
language:
- ru
tags:
- vits
license: cc-by-nc-4.0
pipeline_tag: text-to-speech
widget:
- example_title: text to speech
text: >
прив+ет, как дел+а? всё +очень хорош+о! а у тебя как?
---
# VITS Text-to-Speech Model for Russian
The model expects lowercase input text.
Example of text-to-speech generation:
```python
from transformers import VitsModel, AutoTokenizer
import torch
import scipy.io.wavfile
model = VitsModel.from_pretrained("joefox/tts_vits_ru_hf")
tokenizer = AutoTokenizer.from_pretrained("joefox/tts_vits_ru_hf")
text = "Привет, как дел+а? Всё +очень хорош+о! А у тебя как?"
text = text.lower()
inputs = tokenizer(text, return_tensors="pt")
inputs['speaker_id'] = 3
with torch.no_grad():
    output = model(**inputs).waveform
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output[0].cpu().numpy())
```
To play the audio in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## Languages covered
Russian (ru_RU)
|
nilq/baby-tokenizer | nilq | 2024-02-22T18:50:30Z | 0 | 1 | null | [
"babylm",
"tokenizer",
"en",
"dataset:nilq/babylm-100M",
"license:mit",
"region:us"
] | null | 2024-01-21T15:16:10Z | ---
license: mit
language:
- en
tags:
- babylm
- tokenizer
datasets:
- nilq/babylm-100M
---
## Baby Tokenizer
A compact SentencePiece tokenizer for sample-efficient English language modeling, trained on plain natural language.
### Usage
#### Transformers
```py
from transformers import AutoTokenizer
tokenizer_baby = AutoTokenizer.from_pretrained("nilq/baby-tokenizer")
```
#### Tokenizers
```py
from tokenizers import Tokenizer
tokenizer_baby = Tokenizer.from_pretrained("nilq/baby-tokenizer")
```
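A quick round-trip sanity check (a sketch; the example sentence is arbitrary, and the exact tokens and ids depend on the trained vocabulary):
```py
from tokenizers import Tokenizer

tokenizer_baby = Tokenizer.from_pretrained("nilq/baby-tokenizer")

encoding = tokenizer_baby.encode("The child read a picture book.")
print(encoding.tokens)                      # subword pieces
print(tokenizer_baby.decode(encoding.ids))  # back to (normalized) text
```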
### Data
This tokeniser is derived from the BabyLM 100M dataset of mixed domain data, consisting of the following sources:
- CHILDES (child-directed speech)
- Subtitles (speech)
- BNC (speech)
- TED talks (speech)
- children's books (simple written language).
### Specifications
- Vocabulary size: 20k
- Alphabet limit: 150
- Minimum token frequency: 100 |
nilq/baby-tokenizer-uncased | nilq | 2024-02-22T18:50:15Z | 0 | 0 | null | [
"babylm",
"tokenizer",
"en",
"dataset:nilq/babylm-100M",
"license:mit",
"region:us"
] | null | 2024-02-22T18:48:38Z | ---
license: mit
language:
- en
tags:
- babylm
- tokenizer
datasets:
- nilq/babylm-100M
---
## Baby Tokenizer (Uncased)
A compact SentencePiece tokenizer for sample-efficient English language modeling, trained on plain natural language.
### Usage
#### Transformers
```py
from transformers import AutoTokenizer
tokenizer_baby = AutoTokenizer.from_pretrained("nilq/baby-tokenizer-uncased")
```
#### Tokenizers
```py
from tokenizers import Tokenizer
tokenizer_baby = Tokenizer.from_pretrained("nilq/baby-tokenizer-uncased")
```
### Data
This tokeniser is derived from the BabyLM 100M dataset of mixed domain data, consisting of the following sources:
- CHILDES (child-directed speech)
- Subtitles (speech)
- BNC (speech)
- TED talks (speech)
- children's books (simple written language).
### Specifications
- Vocabulary size: 20k
- Alphabet limit: 150
- Minimum token frequency: 100 |
VATSAL1729/huggy | VATSAL1729 | 2024-02-22T18:49:53Z | 37 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-02-22T18:48:51Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: VATSAL1729/huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
stevethecur/layoutlm-funsd-tf | stevethecur | 2024-02-22T18:48:52Z | 4 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_keras_callback",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-09-18T20:39:40Z | ---
license: mit
tags:
- generated_from_keras_callback
base_model: microsoft/layoutlm-base-uncased
model-index:
- name: layoutlm-funsd-tf
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd-tf
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5937
- Validation Loss: 1.1902
- Train Overall Precision: 0.4751
- Train Overall Recall: 0.5850
- Train Overall F1: 0.5244
- Train Overall Accuracy: 0.6201
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
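For reference, an equivalent optimizer can be constructed with `transformers.create_optimizer` (TensorFlow). This is a sketch; the total step count below is an assumption, since the card does not report it:
```python
from transformers import create_optimizer

# Rebuild AdamWeightDecay with the settings reported above; num_train_steps is
# hypothetical (steps per epoch x number of epochs is not stated in this card).
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=1600,   # assumption
    num_warmup_steps=0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-7,
    weight_decay_rate=0.01,
)
```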
### Training results
| Train Loss | Validation Loss | Train Overall Precision | Train Overall Recall | Train Overall F1 | Train Overall Accuracy | Epoch |
|:----------:|:---------------:|:-----------------------:|:--------------------:|:----------------:|:----------------------:|:-----:|
| 1.6536 | 1.4851 | 0.1737 | 0.3176 | 0.2245 | 0.4025 | 0 |
| 1.3258 | 1.2951 | 0.2957 | 0.4325 | 0.3513 | 0.4737 | 1 |
| 1.1768 | 1.1266 | 0.3614 | 0.4892 | 0.4157 | 0.5489 | 2 |
| 1.0113 | 1.0274 | 0.3889 | 0.5294 | 0.4484 | 0.6040 | 3 |
| 0.9157 | 1.0104 | 0.4428 | 0.5414 | 0.4871 | 0.6152 | 4 |
| 0.7484 | 1.0807 | 0.4742 | 0.5354 | 0.5029 | 0.6153 | 5 |
| 0.6791 | 1.2077 | 0.4709 | 0.5434 | 0.5045 | 0.6049 | 6 |
| 0.5937 | 1.1902 | 0.4751 | 0.5850 | 0.5244 | 0.6201 | 7 |
### Framework versions
- Transformers 4.38.1
- TensorFlow 2.15.0
- Datasets 2.17.1
- Tokenizers 0.15.2
|
LiukG/mus_promoter-finetuned-lora-500m-1000g | LiukG | 2024-02-22T18:38:49Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"esm",
"text-classification",
"generated_from_trainer",
"base_model:InstaDeepAI/nucleotide-transformer-500m-1000g",
"base_model:finetune:InstaDeepAI/nucleotide-transformer-500m-1000g",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-22T18:37:03Z | ---
license: cc-by-nc-sa-4.0
base_model: InstaDeepAI/nucleotide-transformer-500m-1000g
tags:
- generated_from_trainer
metrics:
- f1
- matthews_correlation
- accuracy
model-index:
- name: mus_promoter-finetuned-lora-500m-1000g
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mus_promoter-finetuned-lora-500m-1000g
This model is a fine-tuned version of [InstaDeepAI/nucleotide-transformer-500m-1000g](https://huggingface.co/InstaDeepAI/nucleotide-transformer-500m-1000g) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2791
- F1: 0.9211
- Matthews Correlation: 0.8076
- Accuracy: 0.9062
- F1 Score: 0.9211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
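For context, the LoRA setup implied by the model name could look like the following PEFT sketch; the rank, alpha, and dropout values are assumptions, not values reported in this card:
```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "InstaDeepAI/nucleotide-transformer-500m-1000g", num_labels=2
)
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=16,               # hypothetical rank
    lora_alpha=32,      # hypothetical scaling factor
    lora_dropout=0.05,  # hypothetical dropout
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```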
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Matthews Correlation | Accuracy | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------------------:|:--------:|:--------:|
| 0.6279 | 0.43 | 100 | 0.4652 | 0.8986 | 0.7910 | 0.8906 | 0.8986 |
| 0.3719 | 0.85 | 200 | 0.3562 | 0.9167 | 0.8113 | 0.9062 | 0.9167 |
| 0.3615 | 1.28 | 300 | 0.6468 | 0.8718 | 0.6790 | 0.8438 | 0.8718 |
| 0.3425 | 1.71 | 400 | 0.4302 | 0.8889 | 0.7210 | 0.8594 | 0.8889 |
| 0.3106 | 2.14 | 500 | 0.3645 | 0.9041 | 0.7773 | 0.8906 | 0.9041 |
| 0.3218 | 2.56 | 600 | 0.2542 | 0.9333 | 0.8395 | 0.9219 | 0.9333 |
| 0.2135 | 2.99 | 700 | 0.4137 | 0.9211 | 0.8076 | 0.9062 | 0.9211 |
| 0.2512 | 3.42 | 800 | 0.3547 | 0.9351 | 0.8414 | 0.9219 | 0.9351 |
| 0.1963 | 3.85 | 900 | 0.2171 | 0.9333 | 0.8395 | 0.9219 | 0.9333 |
| 0.1304 | 4.27 | 1000 | 0.2791 | 0.9211 | 0.8076 | 0.9062 | 0.9211 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
LoneStriker/opus-v1.2-7b-6.0bpw-h6-exl2 | LoneStriker | 2024-02-22T18:32:58Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"unsloth",
"axolotl",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T18:30:34Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- unsloth
- axolotl
---
# DreamGen Opus V1
<div style="display: flex; flex-direction: row; align-items: center;">
<img src="/dreamgen/opus-v1.2-7b/resolve/main/images/logo-1024.png" alt="model logo" style="
border-radius: 12px;
margin-right: 12px;
margin-top: 0px;
margin-bottom: 0px;
max-width: 100px;
height: auto;
"/>
Models for **(steerable) story-writing and role-playing**.
<br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31).
</div>
## Prompting
[Read the full Opus V1 prompting guide](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can readily copy.
<details>
<summary>The models use an extended version of ChatML.</summary>
```
<|im_start|>system
(Story description in the right format here)
(Typically consists of plot description, style description and characters)<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Alice
(Continuation of the story from the Alice character)<|im_end|>
<|im_start|>text
(Continuation of the story from no character in particular (pure narration))<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Bob
(Continuation of the story from the Bob character)<|im_end|>
```
The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names.
Pay attention to the following:
- The `text` messages can (but don't have to) have `names`; names are used to indicate the "active" character during role-play.
- There can be multiple subsequent messages with a `text` role, especially if names are involved.
- There can be multiple names attached to a message.
- The format for names is `names= {{name[0]}}; {{name[1]}}`; note the spaces after `names=` and after the `;`. This spacing leads to the most natural tokenization of the names.
</details>
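As an illustration, here is a small helper that assembles prompts in this extended ChatML. This is an informal sketch, not an official DreamGen utility:
```python
def format_turn(role, content, names=None):
    # e.g. role="text", names=["Alice"] -> header "text names= Alice"
    header = role if not names else f"{role} names= " + "; ".join(names)
    return f"<|im_start|>{header}\n{content}<|im_end|>\n"

prompt = (
    format_turn("system", "(Story description in the right format here)")
    + format_turn("user", "(Your instruction on how the story should continue)")
    + format_turn("text", "(Continuation from the Alice character)", names=["Alice"])
)
print(prompt)
```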
While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing-related tasks as well as general assistance.
<img src="/dreamgen/opus-v1.2-7b/resolve/main/images/story_writing.webp" alt="story writing" style="
padding: 12px;
border-radius: 12px;
border: 2px solid #f9a8d4;
background: rgb(9, 9, 11);
"/>
Here's how you can prompt the model for the following tasks:
- Steerable [Story-writing](https://dreamgen.com/docs/models/opus/v1#task-story-writing) and [Role-playing](https://dreamgen.com/docs/models/opus/v1#task-role-playing):
- Input:
- System prompt: You provide story / role-play description, which consists of:
- Plot description
- Style description
- Characters and their descriptions
- Conversation turns:
- Text / message turn: This represents part of the story or role play
- Instruction: This tells the model what should happen next
- Output: Continuation of the story / role-play.
- [Story plot summarization](https://dreamgen.com/docs/models/opus/v1#task-plot-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the story or chapters.
- [Story character description](https://dreamgen.com/docs/models/opus/v1#task-char-description)
- Input: A story, or a few chapters of a story, and a set of characters.
- Output: A description of the characters.
- [Story style description](https://dreamgen.com/docs/models/opus/v1#task-style-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the style of the story.
- [Story description to chapters](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions)
- Input: A brief plot description and the desired number of chapters.
- Output: A description for each chapter.
- And more...
### Sampling params
For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`.
You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures.
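A hedged sketch of these settings with plain `transformers` (Min P sampling requires a sufficiently recent `transformers` release; the repo id below is the base dreamgen model, not this quant):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dreamgen/opus-v1.2-7b")
model = AutoModelForCausalLM.from_pretrained("dreamgen/opus-v1.2-7b", device_map="auto")

inputs = tokenizer(
    "<|im_start|>user\nWrite a short scene.<|im_end|>\n<|im_start|>text\n",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    min_p=0.1,               # recommended starting point
    temperature=0.8,         # recommended starting point
    repetition_penalty=1.1,  # optional, per the note above
    max_new_tokens=256,
)
print(tokenizer.decode(outputs[0]))
```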
## Dataset
The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long.
All story-writing and role-playing examples were based on human-written text.

## Running the model
The model should be compatible with any software that supports the base model, but beware of the prompting (see above).
### Running Locally
- [Chat template from model config](tokenizer_config.json#L51)
- This uses "text" role instead of the typical "assistant" role, and it does not (can’t?) support names
- [LM Studio config](configs/lmstudio.json)
- This uses "text" role role as well
### Running on DreamGen.com (free)
You can try the model for free on [dreamgen.com](https://dreamgen.com) — note that an account is required.
## Community
Join the DreamGen community on [**Discord**](https://dreamgen.com/discord) to get early access to new models.
## License
- This model is intended for personal use only, other use is not permitted.
|
satyroffrost/triple-20e-1000-fit-all-mpnet-base-v2 | satyroffrost | 2024-02-22T18:26:19Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-02-22T13:54:44Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# satyroffrost/triple-20e-1000-fit-all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('satyroffrost/triple-20e-1000-fit-all-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=satyroffrost/triple-20e-1000-fit-all-mpnet-base-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 125 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
    "epochs": 8,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 10000,
    "weight_decay": 0.01
}
```
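Put together, the fit call described above corresponds roughly to the following sketch; the base checkpoint and the triplet texts are placeholders, not the actual training data:
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("all-mpnet-base-v2")  # assumed base checkpoint

train_examples = [
    InputExample(texts=["anchor sentence", "positive sentence", "negative sentence"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=8,
    warmup_steps=10000,
    weight_decay=0.01,
)
```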
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
  (2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
LiukG/mus_promoter-finetuned-lora-500m-human-ref | LiukG | 2024-02-22T18:23:34Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"esm",
"text-classification",
"generated_from_trainer",
"base_model:InstaDeepAI/nucleotide-transformer-500m-human-ref",
"base_model:finetune:InstaDeepAI/nucleotide-transformer-500m-human-ref",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-22T18:21:46Z | ---
license: cc-by-nc-sa-4.0
base_model: InstaDeepAI/nucleotide-transformer-500m-human-ref
tags:
- generated_from_trainer
metrics:
- f1
- matthews_correlation
- accuracy
model-index:
- name: mus_promoter-finetuned-lora-500m-human-ref
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mus_promoter-finetuned-lora-500m-human-ref
This model is a fine-tuned version of [InstaDeepAI/nucleotide-transformer-500m-human-ref](https://huggingface.co/InstaDeepAI/nucleotide-transformer-500m-human-ref) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4605
- F1: 0.9444
- Matthews Correlation: 0.8749
- Accuracy: 0.9375
- F1 Score: 0.9444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Matthews Correlation | Accuracy | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------------------:|:--------:|:--------:|
| 0.7975 | 0.43 | 100 | 0.3190 | 0.9231 | 0.8108 | 0.9062 | 0.9231 |
| 0.3818 | 0.85 | 200 | 0.2951 | 0.9167 | 0.8113 | 0.9062 | 0.9167 |
| 0.3829 | 1.28 | 300 | 0.5043 | 0.9 | 0.7507 | 0.875 | 0.9 |
| 0.2565 | 1.71 | 400 | 0.2655 | 0.9351 | 0.8414 | 0.9219 | 0.9351 |
| 0.2098 | 2.14 | 500 | 0.3518 | 0.9333 | 0.8395 | 0.9219 | 0.9333 |
| 0.1841 | 2.56 | 600 | 0.2601 | 0.9211 | 0.8076 | 0.9062 | 0.9211 |
| 0.0804 | 2.99 | 700 | 0.3953 | 0.9315 | 0.8411 | 0.9219 | 0.9315 |
| 0.0463 | 3.42 | 800 | 0.4732 | 0.9444 | 0.8749 | 0.9375 | 0.9444 |
| 0.057 | 3.85 | 900 | 0.4799 | 0.9444 | 0.8749 | 0.9375 | 0.9444 |
| 0.0144 | 4.27 | 1000 | 0.4605 | 0.9444 | 0.8749 | 0.9375 | 0.9444 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
sharren/vit-skin-demo-v1 | sharren | 2024-02-22T18:19:28Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-22T18:18:50Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-skin-demo-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-skin-demo-v1
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the skin-cancer dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4302
- Accuracy: 0.8558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7377 | 0.31 | 100 | 0.7305 | 0.7553 |
| 0.8988 | 0.62 | 200 | 0.6799 | 0.7541 |
| 0.7157 | 0.93 | 300 | 0.6039 | 0.7772 |
| 0.5569 | 1.25 | 400 | 0.6506 | 0.7578 |
| 0.5342 | 1.56 | 500 | 0.5929 | 0.7846 |
| 0.6498 | 1.87 | 600 | 0.5553 | 0.7953 |
| 0.4956 | 2.18 | 700 | 0.5429 | 0.7921 |
| 0.5216 | 2.49 | 800 | 0.4704 | 0.8302 |
| 0.3468 | 2.8 | 900 | 0.4669 | 0.8327 |
| 0.4862 | 3.12 | 1000 | 0.4615 | 0.8421 |
| 0.4018 | 3.43 | 1100 | 0.4526 | 0.8458 |
| 0.302 | 3.74 | 1200 | 0.4302 | 0.8558 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
mrm8488/phi-2-coder | mrm8488 | 2024-02-22T18:18:43Z | 73 | 26 | transformers | [
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"generated_from_trainer",
"code",
"coding",
"phi-2",
"phi2",
"mlx",
"custom_code",
"dataset:HuggingFaceH4/CodeAlpaca_20K",
"doi:10.57967/hf/1518",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-24T09:49:30Z | ---
tags:
- generated_from_trainer
- code
- coding
- phi-2
- phi2
- mlx
model-index:
- name: phi-2-coder
results: []
license: other
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- code
thumbnail: https://huggingface.co/mrm8488/phi-2-coder/resolve/main/phi-2-coder-logo.png
datasets:
- HuggingFaceH4/CodeAlpaca_20K
pipeline_tag: text-generation
library_name: transformers
---
<div style="text-align:center;width:250px;height:250px;">
<img src="https://huggingface.co/mrm8488/phi-2-coder/resolve/main/phi-2-coder-logo.png" alt="phi-2 coder logo"/>
</div>
# Phi-2 Coder 👩💻
**Phi-2** fine-tuned on the **CodeAlpaca 20k instruction dataset** using the **QLoRA** method with the [PEFT](https://github.com/huggingface/peft) library.
## Model description 🧠
[Phi-2](https://huggingface.co/microsoft/phi-2)
Phi-2 is a Transformer with **2.7 billion** parameters. It was trained using the same data sources as [Phi-1.5](https://huggingface.co/microsoft/phi-1.5), augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters.
## Training and evaluation data 📚
[CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K): contains 20K instruction-following data used for fine-tuning the Code Alpaca model.
### Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
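Expressed programmatically, that config corresponds roughly to the following (a sketch; unspecified fields fall back to library defaults):
```py
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```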
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 66
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7631 | 0.36 | 50 | 0.7174 |
| 0.6735 | 0.71 | 100 | 0.6949 |
| 0.696 | 1.07 | 150 | 0.6893 |
| 0.7861 | 1.42 | 200 | 0.6875 |
| 0.7346 | 1.78 | 250 | 0.6867 |
### HumanEval results 📊
WIP
### Example of usage 👩💻
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mrm8488/phi-2-coder"
tokenizer = AutoTokenizer.from_pretrained(model_id, add_bos_token=True, trust_remote_code=True, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, torch_dtype=torch.float16, device_map="auto")
def generate(
    instruction,
    max_new_tokens=128,
    temperature=0.1,
    top_p=0.75,
    top_k=40,
    num_beams=2,
    **kwargs,
):
    prompt = "Instruct: " + instruction + "\nOutput:"
    print(prompt)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to("cuda")
    attention_mask = inputs["attention_mask"].to("cuda")
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            max_new_tokens=max_new_tokens,
            temperature=temperature,
            top_p=top_p,
            top_k=top_k,
            num_beams=num_beams,
            eos_token_id=tokenizer.eos_token_id,
            use_cache=True,
            early_stopping=True,
            **kwargs,
        )
    output = tokenizer.decode(generation_output[0])
    return output.split("\nOutput:")[1].lstrip("\n")
instruction = "Design a class for representing a person in Python."
print(generate(instruction))
```
### How to use with [MLX](https://github.com/ml-explore/mlx).
```bash
# Install mlx, mlx-examples, huggingface-cli
pip install mlx
pip install huggingface_hub hf_transfer
git clone https://github.com/ml-explore/mlx-examples.git
# Download model
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir phi-2-coder mrm8488/phi-2-coder
# Run example
python mlx-examples/llms/phi2.py --model-path phi-2-coder --prompt "Design a class for representing a person in Python"
```
### Citation
```
@misc {manuel_romero_2023,
author = { {Manuel Romero} },
title = { phi-2-coder (Revision 4ae69ae) },
year = 2023,
url = { https://huggingface.co/mrm8488/phi-2-coder },
doi = { 10.57967/hf/1518 },
publisher = { Hugging Face }
}
``` |
macadeliccc/mixtral-instruct-0.1-laser-GGUF | macadeliccc | 2024-02-22T18:17:26Z | 2 | 1 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-22T17:04:18Z | ---
license: apache-2.0
---
Credit to Fernando Fernandes Neto.
Original [repo](https://huggingface.co/cognitivecomputations/mixtral-instruct-0.1-laser) |
rahuldshetty/gemma-7b-it-gguf-quantized | rahuldshetty | 2024-02-22T18:16:09Z | 19 | 16 | transformers | [
"transformers",
"gguf",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-02-21T15:31:29Z | ---
library_name: transformers
tags: []
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
GGUF Quantized version of [gemma-7b-it](https://huggingface.co/google/gemma-7b-it).
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [gemma-7b-it-Q4_K_M.gguf](https://huggingface.co/rahuldshetty/gemma-7b-it-gguf-quantized/blob/main/gemma-7b-it-Q4_K_M.gguf) | Q4_K_M | 4 | 5.13 GB | medium, balanced quality - recommended |
| [gemma-7b-it-Q8_0.gguf](https://huggingface.co/rahuldshetty/gemma-7b-it-gguf-quantized/blob/main/gemma-7b-it-Q8_0.gguf) | Q8_0 | 8 | 9.08 GB | very large, extremely low quality loss - not recommended |
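The card does not show how to run the GGUF files; one hedged option is `llama-cpp-python` (the file name below matches the Q4_K_M entry in the table above):
```python
from llama_cpp import Llama

llm = Llama(model_path="gemma-7b-it-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write me a poem about Machine Learning."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```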
# Gemma Model Card (Taken from Official HF Repo)
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 7B instruct version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B base model](https://huggingface.co/google/gemma-7b), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-it-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning the model
You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-7b-it`.
In that repository, we provide:
* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "gg-hf/gemma-7b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    {"role": "user", "content": "Write a hello world program"},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
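For example, a manual equivalent of the template could look like this sketch:
```py
def to_gemma_prompt(chat):
    # Mirrors the <start_of_turn>/<end_of_turn> layout shown above, then opens
    # a model turn for generation.
    prompt = ""
    for turn in chat:
        prompt += f"<start_of_turn>{turn['role']}\n{turn['content']}<end_of_turn>\n"
    return prompt + "<start_of_turn>model\n"

chat = [{"role": "user", "content": "Write a hello world program"}]
print(to_gemma_prompt(chat))
```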
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
| ------------------------------ | ------------- | ----------- | --------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
|
Kevinkrs/TrialLlama | Kevinkrs | 2024-02-22T18:08:47Z | 0 | 0 | null | [
"region:us"
] | null | 2023-09-28T10:58:56Z | # Loading model
This repository only contains the adapter weights from LoRA fine-tuning.
To load the model, the base model `Llama-2-13b-chat-hf` has to be loaded first and used as the base onto which the adapter weights are applied.
## Merging
The adapter weights can be merged with the base model. Since this takes much more space though, only the adapter folder was uploaded. If merging is required, please refer to the project repository or the llama-recipes repository by Meta research (https://github.com/facebookresearch/llama-recipes) for examples.
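For illustration, loading and merging with PEFT could look like this sketch (the base repo id below is the standard Hugging Face name for `Llama-2-13b-chat-hf`; adjust paths to your setup):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-chat-hf")
model = PeftModel.from_pretrained(base, "Kevinkrs/TrialLlama")  # adapter weights from this repo
merged = model.merge_and_unload()  # optional: bakes the adapter into the base weights
```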
|
arda1319/distilbert-base-uncased-finetuned-emotions | arda1319 | 2024-02-22T18:06:31Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-22T16:35:34Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotions
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9212419542732461
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotions
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2185
- Accuracy: 0.9215
- F1: 0.9212
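A quick inference sketch (the label set follows the emotion dataset: sadness, joy, love, anger, fear, surprise):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="arda1319/distilbert-base-uncased-finetuned-emotions",
)
print(classifier("I can't wait to see you again!"))
```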
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8126 | 1.0 | 250 | 0.3154 | 0.9035 | 0.9038 |
| 0.2459 | 2.0 | 500 | 0.2185 | 0.9215 | 0.9212 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
hythyt/poca-SoccerTwos | hythyt | 2024-02-22T17:53:50Z | 19 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-02-22T17:53:18Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: hythyt/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
facebook/flava-full | facebook | 2024-02-22T17:51:43Z | 9,583 | 37 | transformers | [
"transformers",
"pytorch",
"flava",
"pretraining",
"arxiv:2112.04482",
"arxiv:2108.10904",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | null | 2022-04-09T00:40:12Z | ---
license: bsd-3-clause
---
## Model Card: FLAVA
## Model Details
FLAVA was developed by researchers at FAIR to understand whether a single model with a unified architecture can work across different modalities. The model was pretrained solely on publicly available multimodal datasets containing 70M image-text pairs in total, and is thus fully reproducible. The unimodal datasets ImageNet and BookCorpus + CCNews were also used to provide unimodal data to the model. The model (i) can, like CLIP, be used for arbitrary image classification tasks in a zero-shot manner, (ii) can be used for image or text retrieval in a zero-shot manner, and (iii) can be fine-tuned for natural language understanding (NLU) tasks such as GLUE and vision-and-language reasoning tasks such as VQA v2. The model is able to use data available as images, text corpora, and image-text pairs. In the original paper, the authors evaluate FLAVA on 32 tasks from the computer vision, NLU, and vision-and-language domains and show impressive performance across the board, scoring a higher micro-average than CLIP while being open.
## Model Date
The model was originally released in November 2021.
## Model Type
The FLAVA model uses a ViT-B/32 transformer for both the image encoder and the text encoder. FLAVA also employs a 6-layer multimodal encoder on top for multimodal tasks such as vision-and-language reasoning (VQA). Each component of the FLAVA model can be loaded individually from the `facebook/flava-full` checkpoint. If you need the complete heads used for pretraining, please use the `FlavaForPreTraining` model class; otherwise `FlavaModel` should suffice for most use cases. This [repository](https://github.com/facebookresearch/multimodal/tree/main/examples/flava) also contains code to pretrain the FLAVA model from scratch.
## Documents
- [FLAVA Paper](https://arxiv.org/abs/2112.04482)
## Using with Transformers
### FlavaModel
FLAVA model supports vision, language and multimodal inputs. You can pass inputs corresponding to the domain you are concerned with to get losses and outputs related to that domain.
```py
from PIL import Image
import requests
from transformers import FlavaProcessor, FlavaModel
model = FlavaModel.from_pretrained("facebook/flava-full")
processor = FlavaProcessor.from_pretrained("facebook/flava-full")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
text=["a photo of a cat", "a photo of a dog"], images=[image, image], return_tensors="pt", padding="max_length", max_length=77
)
outputs = model(**inputs)
image_embeddings = outputs.image_embeddings # Batch size X (Number of image patches + 1) x Hidden size => 2 X 197 X 768
text_embeddings = outputs.text_embeddings # Batch size X (Text sequence length + 1) X Hidden size => 2 X 77 X 768
multimodal_embeddings = outputs.multimodal_embeddings # Batch size X (Number of image patches + Text Sequence Length + 3) X Hidden size => 2 X 275 x 768
# Multimodal embeddings can be used for multimodal tasks such as VQA
## Pass only image
from transformers import FlavaFeatureExtractor
feature_extractor = FlavaFeatureExtractor.from_pretrained("facebook/flava-full")
inputs = feature_extractor(images=[image, image], return_tensors="pt")
outputs = model(**inputs)
image_embeddings = outputs.image_embeddings
## Pass only text
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("facebook/flava-full")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], return_tensors="pt", padding="max_length", max_length=77)
outputs = model(**inputs)
text_embeddings = outputs.text_embeddings
```
#### Encode Image
```py
from PIL import Image
import requests
from transformers import FlavaFeatureExtractor, FlavaModel
model = FlavaModel.from_pretrained("facebook/flava-full")
feature_extractor = FlavaFeatureExtractor.from_pretrained("facebook/flava-full")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=[image], return_tensors="pt")
image_embedding = model.get_image_features(**inputs)
```
#### Encode Text
```py
from PIL import Image
from transformers import BertTokenizer, FlavaModel
model = FlavaModel.from_pretrained("facebook/flava-full")
tokenizer = BertTokenizer.from_pretrained("facebook/flava-full")
inputs = tokenizer(text=["a photo of a dog"], return_tensors="pt", padding="max_length", max_length=77)
text_embedding = model.get_text_features(**inputs)
```
### FlavaForPreTraining
FLAVA model supports vision, language and multimodal inputs. You can pass inputs corresponding to each modality to get losses and outputs related to that domain.
```py
from PIL import Image
import requests
from transformers import FlavaProcessor, FlavaForPreTraining
model = FlavaForPreTraining.from_pretrained("facebook/flava-full")
processor = FlavaProcessor.from_pretrained("facebook/flava-full")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
text=["a photo of a cat", "a photo of a dog"],
images=[image, image],
return_tensors="pt",
padding="max_length",
max_length=77,
return_codebook_pixels=True,
return_image_mask=True,
# Other things such as mlm_labels, itm_labels can be passed here. See docs
)
inputs.bool_masked_pos.zero_()
outputs = model(**inputs)
image_embeddings = outputs.image_embeddings # Batch size X (Number of image patches + 1) x Hidden size => 2 X 197 X 768
text_embeddings = outputs.text_embeddings # Batch size X (Text sequence length + 1) X Hidden size => 2 X 77 X 768
# Multimodal embeddings can be used for multimodal tasks such as VQA
multimodal_embeddings = outputs.multimodal_embeddings # Batch size X (Number of image patches + Text Sequence Length + 3) X Hidden size => 2 X 275 x 768
# Loss
loss = outputs.loss # probably NaN due to missing labels
# Global contrastive loss logits
image_contrastive_logits = outputs.contrastive_logits_per_image
text_contrastive_logits = outputs.contrastive_logits_per_text
# ITM logits
itm_logits = outputs.itm_logits
```
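Building on the pretraining example above, the global contrastive logits can also drive a quick zero-shot image classification sketch (the candidate labels below are illustrative, not from the original card):
```py
import torch

# Reuses `model`, `processor`, and `image` from the snippet above.
candidate_labels = ["a photo of a cat", "a photo of a dog"]
inputs = processor(
    text=candidate_labels,
    images=[image] * len(candidate_labels),
    return_tensors="pt",
    padding="max_length",
    max_length=77,
    return_codebook_pixels=True,
    return_image_mask=True,
)
inputs.bool_masked_pos.zero_()

with torch.no_grad():
    outputs = model(**inputs)

# Row 0: similarity of the first image against every candidate caption.
probs = outputs.contrastive_logits_per_image.softmax(dim=-1)
print(dict(zip(candidate_labels, probs[0].tolist())))
```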
### FlavaImageModel
```py
from PIL import Image
import requests
from transformers import FlavaFeatureExtractor, FlavaImageModel
model = FlavaImageModel.from_pretrained("facebook/flava-full")
feature_extractor = FlavaFeatureExtractor.from_pretrained("facebook/flava-full")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=[image], return_tensors="pt")
outputs = model(**inputs)
image_embeddings = outputs.last_hidden_state
```
### FlavaTextModel
```py
from PIL import Image
from transformers import BertTokenizer, FlavaTextModel
model = FlavaTextModel.from_pretrained("facebook/flava-full")
tokenizer = BertTokenizer.from_pretrained("facebook/flava-full")
inputs = tokenizer(text=["a photo of a dog"], return_tensors="pt", padding="max_length", max_length=77)
outputs = model(**inputs)
text_embeddings = outputs.last_hidden_state
```
## Model Use
## Intended Use
The model is intended to serve as a reproducible research artifact for research communities, in light of models whose exact reproduction details are never released, such as [CLIP](https://github.com/openai/CLIP) and [SimVLM](https://arxiv.org/abs/2108.10904). The FLAVA model performs equivalently to these models on most tasks while being trained on less data (70M pairs compared to CLIP's 400M and SimVLM's 1.8B pairs, respectively), all of it public. We hope that this model enables communities to better understand and explore zero-shot and arbitrary image classification, multi-domain pretraining, and modality-agnostic generic architectures, while also providing a chance to develop on top of it.
## Primary Intended Uses
The primary intended users of these models are AI researchers.
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of foundation models which work across domains which in this case are vision, language and combined multimodal vision-and-language domain.
## Out-of-Scope Use Cases
Similar to CLIP, **any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases, such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. Though FLAVA is trained on open, public data that doesn't contain much harmful content, users should still employ proper safety measures.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
## Data
FLAVA was pretrained on 70M publicly available image and text pairs. This includes datasets such as COCO, Visual Genome, Localized Narratives, RedCaps, a custom filtered subset of YFCC100M, SBUCaptions, Conceptual Captions and Wikipedia Image-Text datasets. A large portion of this dataset comes from the internet and thus can be biased towards the people most connected to the internet, such as those from developed countries and younger, male users.
## Data Mission Statement
Our goal with building this dataset, called PMD (Public Multimodal Datasets), was two-fold: (i) allow reproducibility of vision-language foundation models with publicly available data, and (ii) test the robustness and generalizability of FLAVA across domains. The data was collected from existing public dataset sources that had already been filtered by the original dataset curators to exclude adult and excessively violent content. We will make the URLs of the images public for further research reproducibility.
## Performance and Limitations
## Performance
FLAVA has been evaluated on 35 different tasks from computer vision, natural language understanding, and vision-and-language reasoning.
On COCO and Flickr30k retrieval, we report zero-shot accuracy; on image tasks, we report linear-eval accuracy; and on the rest of the tasks, we report fine-tuned accuracies. Generally, FLAVA works much better than CLIP on tasks that require good text understanding. The paper describes this in more detail, but the following are the 35 datasets:
### Natural Language Understanding
- MNLI
- CoLA
- MRPC
- QQP
- SST-2
- QNLI
- RTE
- STS-B
### Image Understanding
- ImageNet
- Food101
- CIFAR10
- CIFAR100
- Cars
- Aircraft
- DTD
- Pets
- Caltech101
- Flowers102
- MNIST
- STL10
- EuroSAT
- GTSRB
- KITTI
- PCAM
- UCF101
- CLEVR
- FER 2013
- SUN397
- Image SST
- Country 211
### Vision and Language Reasoning
- VQA v2
- SNLI-VE
- Hateful Memes
- Flickr30K Retrieval
- COCO Retrieval
## Limitations
Currently, FLAVA has many limitations. Its image classification accuracy is not on par with CLIP on some tasks, while its text accuracy is not on par with BERT on some tasks, suggesting possible room for improvement. FLAVA also doesn't work well on tasks containing scene text, given the lack of scene text in most public datasets. Additionally, similar to CLIP, our approach to testing FLAVA has an important limitation in the case of image tasks, where we use linear probes to evaluate FLAVA, and there is evidence suggesting that linear probes can underestimate model performance.
## Feedback/Questions
Please email Amanpreet at `amanpreet [at] nyu [dot] edu` for questions.
|
mlabonne/gemma-2b-it-GGUF | mlabonne | 2024-02-22T17:50:24Z | 200 | 12 | transformers | [
"transformers",
"gguf",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-21T13:50:10Z | ---
library_name: transformers
tags: []
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma-2B-it GGUF
This is a quantized version of the [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) model using [llama.cpp](https://github.com/ggerganov/llama.cpp).
This model card corresponds to the 2B instruct version of the Gemma model. You can also visit the model card of the [7B base model](https://huggingface.co/google/gemma-7b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B base model](https://huggingface.co/google/gemma-2b).
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
## ⚡ Quants
* `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_s`: Uses Q3_K for all tensors
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models.
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q4_k_s`: Uses Q4_K for all tensors
* `q5_0`: Higher accuracy, higher resource usage and slower inference.
* `q5_1`: Even higher accuracy and resource usage, and slower inference.
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q5_k_s`: Uses Q5_K for all tensors
* `q6_k`: Uses Q8_K for all tensors
* `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
## 💻 Usage
This model can be used with the latest version of llama.cpp and LM Studio >0.2.16. A minimal `llama-cpp-python` sketch follows (the GGUF filename is an assumption; substitute whichever quant file you downloaded from this repository):
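```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Filename assumed; use the quant you downloaded, e.g. the Q4_K_M file.
llm = Llama(model_path="gemma-2b-it.Q4_K_M.gguf", n_ctx=2048)

# Gemma's instruction format uses <start_of_turn>/<end_of_turn> delimiters.
prompt = (
    "<start_of_turn>user\n"
    "Write me a haiku about quantization.<end_of_turn>\n"
    "<start_of_turn>model\n"
)
output = llm(prompt, max_tokens=128, stop=["<end_of_turn>"])
print(output["choices"][0]["text"])
```
|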
malksama/KirikoYukoku | malksama | 2024-02-22T17:40:45Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:unknown",
"region:us"
] | text-to-image | 2024-02-22T17:36:51Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: 1girl
parameters:
negative_prompt: easy
output:
url: images/_3b809f99-d6e1-4320-99bc-4d6ad4b76fcf.jfif
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: Kiriko Yukoku
license: unknown
---
# KirikoYukoku
<Gallery />
## Model description
girl
## Trigger words
You should use `Kiriko Yukoku` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/malksama/KirikoYukoku/tree/main) them in the Files & versions tab.
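A minimal sketch of using the LoRA with 🧨 diffusers, assuming the adapter Safetensors file sits where `load_lora_weights` can find it (pass `weight_name=...` if the file layout differs; the prompt reuses the trigger word and widget negative prompt above):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Load the adapter from this repository (file layout assumed).
pipe.load_lora_weights("malksama/KirikoYukoku")

image = pipe("Kiriko Yukoku, 1girl", negative_prompt="easy").images[0]
image.save("kiriko.png")
```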
|
codeaze/deberta_small_22feb | codeaze | 2024-02-22T17:39:08Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-22T17:38:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TatersMcgee/TinyLlama-1.1B-Chat-v1.0-bf16-push-demo | TatersMcgee | 2024-02-22T17:21:07Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-20T21:34:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ThuyNT03/CS505_COQE_viT5_Prompting10_ASPOL | ThuyNT03 | 2024-02-22T17:14:43Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-22T16:11:48Z | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_COQE_viT5_Prompting10_ASPOL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_Prompting10_ASPOL
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
Jennny/Nous-Finetuned | Jennny | 2024-02-22T17:12:57Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TheBloke/Nous-Hermes-Llama2-AWQ",
"base_model:adapter:TheBloke/Nous-Hermes-Llama2-AWQ",
"region:us"
] | null | 2024-02-22T17:10:27Z | ---
library_name: peft
base_model: TheBloke/Nous-Hermes-Llama2-AWQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
numen-tech/Nous-Hermes-2-Mistral-7B-DPO-w4a16g128asym | numen-tech | 2024-02-22T17:11:16Z | 0 | 0 | null | [
"arxiv:2308.13137",
"license:apache-2.0",
"region:us"
] | null | 2024-02-22T17:07:24Z | ---
license: apache-2.0
---
4-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO).
|
Road2Nohand/Llama-2-7b-chat-hf-fine-tuned | Road2Nohand | 2024-02-22T17:09:02Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T15:45:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/gemma-7b-it-8.0bpw-h8-exl2 | LoneStriker | 2024-02-22T17:01:20Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T16:57:10Z | ---
library_name: transformers
tags: []
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 7B instruct version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B base model](https://huggingface.co/google/gemma-7b), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning the model
You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-7b-it`.
In that repository, we provide:
* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First, make sure to install `flash-attn` in your environment: `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "gg-hf/gemma-7b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
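For instance, a tiny helper that reproduces the template by hand might look like this (a sketch following the format above, not part of the official API):
```py
def build_gemma_prompt(turns):
    """turns: list of (role, content) pairs, role being "user" or "model"."""
    prompt = ""
    for role, content in turns:
        prompt += f"<start_of_turn>{role}\n{content}<end_of_turn>\n"
    # Leave the prompt open for the model's next turn.
    return prompt + "<start_of_turn>model\n"

prompt = build_gemma_prompt([("user", "Write a hello world program")])
```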
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open model
alternatives.
|
paulml/TW3_FR_7B_v1 | paulml | 2024-02-22T17:00:53Z | 9 | 2 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"axolotl",
"conversational",
"fr",
"en",
"dataset:tbboukhari/Alpaca_french_instruct",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T13:46:10Z | ---
license: cc-by-nc-4.0
datasets:
- tbboukhari/Alpaca_french_instruct
language:
- fr
- en
tags:
- axolotl
---
**TW3 French 7B v1**
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) using the [tbboukhari/Alpaca_french_instruct](https://huggingface.co/datasets/tbboukhari/Alpaca_french_instruct) dataset.
**Prompt Format**
Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than Alpaca or ShareGPT, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
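You can also build this prompt programmatically rather than by hand. Below is a minimal sketch, assuming the tokenizer in this repository ships a ChatML chat template (if it does not, format the string manually as shown above):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("paulml/TW3_FR_7B_v1")

messages = [
    {"role": "system", "content": "You are Hermes 2."},
    {"role": "user", "content": "Hello, who are you?"},
]

# Renders the turns with <|im_start|>/<|im_end|> markers and appends
# the opening of the assistant turn so the model continues from there.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```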
**Inference Code**
Here is example code using Hugging Face Transformers to run inference with the model (note: in 4-bit, it requires around 5GB of VRAM)
```
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import bitsandbytes, flash_attn
# The base model uses the Mistral architecture; the Auto classes resolve the correct implementation
tokenizer = AutoTokenizer.from_pretrained('paulml/TW3_FR_7B_v1', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
"paulml/TW3_FR_7B_v1",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
Tu es un modèle d'IA, tu dois répondre aux requêtes avec les réponses les plus pertinentes.<|im_end|>
<|im_start|>user
Explique moi ce qu'est un LLM.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response: {response}")
``` |
LoneStriker/gemma-7b-it-6.0bpw-h6-exl2 | LoneStriker | 2024-02-22T16:57:09Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T16:53:48Z | ---
library_name: transformers
tags: []
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 7B instruct version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B base model](https://huggingface.co/google/gemma-7b), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state-of-the-art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get started quickly with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning the model
You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-7b-it`.
In that repository, we provide:
* A script to perform Supervised Fine-Tuning (SFT) on the UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on an English quotes dataset
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment with `pip install flash-attn`.
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "gg-hf/gemma-7b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
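To turn the generated ids back into text, decode only the tokens produced after the prompt (a small follow-up sketch; slicing at `inputs.shape[-1]` assumes you want just the model's reply):
```python
# Drop the prompt tokens and decode only the model's reply.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```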
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1803.05457) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1803.05457) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open model
alternatives.
|
leerina/my-pet-dog | leerina | 2024-02-22T16:55:39Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-22T16:51:08Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by leerina following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
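A minimal `diffusers` inference sketch (the instance token in the prompt below is an assumption — substitute the token this concept was actually trained on):
```python
from diffusers import StableDiffusionPipeline
import torch

# Load the DreamBooth-tuned pipeline from this repository.
pipe = StableDiffusionPipeline.from_pretrained("leerina/my-pet-dog", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "my-pet-dog" is a guessed instance token; adjust to match training.
image = pipe("a photo of my-pet-dog in a sunny park").images[0]
image.save("my-pet-dog-sample.png")
```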
Sample pictures of this concept:

|
LoneStriker/gemma-7b-it-4.0bpw-h6-exl2 | LoneStriker | 2024-02-22T16:50:49Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T16:48:11Z | ---
library_name: transformers
tags: []
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 7B instruct version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B base model](https://huggingface.co/google/gemma-7b), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state-of-the-art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get started quickly with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning the model
You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-7b-it`.
In that repository, we provide:
* A script to perform Supervised Fine-Tuning (SFT) on the UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on an English quotes dataset
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment with `pip install flash-attn`.
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "gg-hf/gemma-7b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
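A short follow-up sketch to extract the reply as text (the slice at `inputs.shape[-1]` is an assumption that only the newly generated tokens are wanted):
```python
# Strip the prompt tokens and decode the model's answer.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```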
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1803.05457) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1803.05457) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open model
alternatives.
|
CultriX/DominaTrix-7B-v2 | CultriX | 2024-02-22T16:50:20Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"CultriX/MonaTrix-v4",
"bardsai/jaskier-7b-dpo-v5.6",
"eren23/ogno-monarch-jaskier-merge-7b",
"conversational",
"base_model:CultriX/MonaTrix-v4",
"base_model:merge:CultriX/MonaTrix-v4",
"base_model:bardsai/jaskier-7b-dpo-v5.6",
"base_model:merge:bardsai/jaskier-7b-dpo-v5.6",
"base_model:eren23/ogno-monarch-jaskier-merge-7b",
"base_model:merge:eren23/ogno-monarch-jaskier-merge-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T15:18:14Z | ---
tags:
- merge
- mergekit
- lazymergekit
- CultriX/MonaTrix-v4
- bardsai/jaskier-7b-dpo-v5.6
- eren23/ogno-monarch-jaskier-merge-7b
base_model:
- CultriX/MonaTrix-v4
- bardsai/jaskier-7b-dpo-v5.6
- eren23/ogno-monarch-jaskier-merge-7b
---
# DominaTrix-7B-v2
DominaTrix-7B-v2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [CultriX/MonaTrix-v4](https://huggingface.co/CultriX/MonaTrix-v4)
* [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6)
* [eren23/ogno-monarch-jaskier-merge-7b](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-Instruct-v0.2
# No parameters necessary for base model
- model: CultriX/MonaTrix-v4
    # Emphasize the beginning of Vicuna-format models
parameters:
weight: 0.36
density: 0.65
- model: bardsai/jaskier-7b-dpo-v5.6
parameters:
weight: 0.34
density: 0.6
# Vicuna format
- model: eren23/ogno-monarch-jaskier-merge-7b
parameters:
weight: 0.3
density: 0.6
merge_method: dare_ties
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
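To reproduce the merge from this configuration, mergekit can be driven from Python (a sketch, assuming `pip install mergekit`; `MergeConfiguration` and `run_merge` follow mergekit's documented API, and the output path is arbitrary):
```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (saved locally as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the dare_ties merge and write the merged weights to disk.
run_merge(
    merge_config,
    out_path="./DominaTrix-7B-v2",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```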
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/DominaTrix-7B-v2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
LoneStriker/gemma-7b-it-3.0bpw-h6-exl2 | LoneStriker | 2024-02-22T16:48:10Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T16:45:55Z | ---
library_name: transformers
tags: []
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 7B instruct version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B base model](https://huggingface.co/google/gemma-7b), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning the model
You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-7b-it`.
In that repository, we provide:
* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment: `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "gg-hf/gemma-7b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
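For example, the single-turn prompt shown above could be assembled directly as a string. This is a minimal sketch that simply mirrors the template printed earlier:

```py
# Manually building the same single-turn prompt (mirrors the template above)
prompt = (
    "<start_of_turn>user\n"
    "Write a hello world program<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```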
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
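The snippet stops at `generate`; to inspect the answer, the newly generated tokens can be decoded. A minimal sketch:

```py
# Decode only the tokens produced after the prompt
generated = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```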
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, with input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: Continuous monitoring (using evaluation metrics,
  human review) and the exploration of de-biasing techniques during model
  training, fine-tuning, and other use cases are encouraged.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.
|
ThuyNT03/CS505_COQE_viT5_Prompting9_ASPOL_vtest | ThuyNT03 | 2024-02-22T16:47:56Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ThuyNT03/CS505_COQE_viT5_Prompting9_ASPOL",
"base_model:finetune:ThuyNT03/CS505_COQE_viT5_Prompting9_ASPOL",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-22T16:24:32Z | ---
license: mit
base_model: ThuyNT03/CS505_COQE_viT5_Prompting9_ASPOL
tags:
- generated_from_trainer
model-index:
- name: CS505_COQE_viT5_Prompting9_ASPOL_vtest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_Prompting9_ASPOL_vtest
This model is a fine-tuned version of [ThuyNT03/CS505_COQE_viT5_Prompting9_ASPOL](https://huggingface.co/ThuyNT03/CS505_COQE_viT5_Prompting9_ASPOL) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal `TrainingArguments` equivalent is sketched after this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
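For reference, here is a minimal `TrainingArguments` sketch matching the values above, assuming the standard Hugging Face `Trainer` setup (the output directory name is illustrative; the optimizer betas and epsilon are the Adam defaults and need no explicit arguments):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="CS505_COQE_viT5_Prompting9_ASPOL_vtest",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed precision
)
```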
### Training results
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
SathvikRapelli/my-pet-dog | SathvikRapelli | 2024-02-22T16:41:07Z | 4 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-22T16:30:51Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by SathvikRapelli following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
|
Schnatz65/bert-base-uncased-issues-128 | Schnatz65 | 2024-02-22T16:40:59Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-02-16T18:54:06Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2484
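The card includes no usage snippet; a minimal fill-mask sketch could look like the following (the example sentence is illustrative, not taken from the training data):

```python
from transformers import pipeline

# Top predictions for the masked token; each entry carries a filled-in
# `sequence` and its `score`.
fill_mask = pipeline("fill-mask", model="Schnatz65/bert-base-uncased-issues-128")
for pred in fill_mask("This is a great [MASK]."):
    print(pred["sequence"], pred["score"])
```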
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.099 | 1.0 | 291 | 1.6869 |
| 1.6375 | 2.0 | 582 | 1.4308 |
| 1.4841 | 3.0 | 873 | 1.3859 |
| 1.397 | 4.0 | 1164 | 1.3731 |
| 1.3394 | 5.0 | 1455 | 1.1839 |
| 1.2819 | 6.0 | 1746 | 1.2912 |
| 1.2403 | 7.0 | 2037 | 1.2614 |
| 1.1983 | 8.0 | 2328 | 1.2071 |
| 1.1653 | 9.0 | 2619 | 1.1822 |
| 1.1407 | 10.0 | 2910 | 1.2134 |
| 1.1275 | 11.0 | 3201 | 1.2029 |
| 1.1064 | 12.0 | 3492 | 1.1685 |
| 1.0799 | 13.0 | 3783 | 1.2484 |
| 1.0776 | 14.0 | 4074 | 1.1658 |
| 1.0634 | 15.0 | 4365 | 1.1192 |
| 1.0607 | 16.0 | 4656 | 1.2484 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.10.3
|
JayR7/distilbert-base-cased | JayR7 | 2024-02-22T16:32:12Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-20T22:09:37Z | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: token-classification
--- |
VietTung04/open_llama_3b_v2_finetuned | VietTung04 | 2024-02-22T16:30:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-22T16:30:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nhatminh/PatchTSTPretrain | nhatminh | 2024-02-22T16:30:10Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"patchtst",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-02-22T16:30:09Z | ---
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1301
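The card gives no usage snippet; assuming this checkpoint follows the standard 🤗 Transformers PatchTST pretraining API (as the `patchtst` tag suggests), loading it might look roughly like this hypothetical sketch:

```python
import torch
from transformers import PatchTSTForPretraining

model = PatchTSTForPretraining.from_pretrained("nhatminh/PatchTSTPretrain")

# Dummy input of shape (batch, context_length, num_input_channels), taken
# from the checkpoint's own configuration.
past_values = torch.randn(1, model.config.context_length, model.config.num_input_channels)
outputs = model(past_values=past_values)
print(outputs.loss)
```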
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2376 | 1.0 | 557 | 0.1378 |
| 0.1626 | 2.0 | 1114 | 0.1266 |
| 0.1515 | 3.0 | 1671 | 0.1213 |
| 0.146 | 4.0 | 2228 | 0.1188 |
| 0.1425 | 5.0 | 2785 | 0.1166 |
| 0.14 | 6.0 | 3342 | 0.1161 |
| 0.138 | 7.0 | 3899 | 0.1144 |
| 0.1365 | 8.0 | 4456 | 0.1141 |
| 0.1351 | 9.0 | 5013 | 0.1138 |
| 0.134 | 10.0 | 5570 | 0.1137 |
| 0.1329 | 11.0 | 6127 | 0.1124 |
| 0.132 | 12.0 | 6684 | 0.1122 |
| 0.1312 | 13.0 | 7241 | 0.1118 |
| 0.1305 | 14.0 | 7798 | 0.1119 |
| 0.1299 | 15.0 | 8355 | 0.1118 |
| 0.1294 | 16.0 | 8912 | 0.1112 |
| 0.129 | 17.0 | 9469 | 0.1112 |
| 0.1285 | 18.0 | 10026 | 0.1116 |
| 0.1282 | 19.0 | 10583 | 0.1105 |
| 0.1276 | 20.0 | 11140 | 0.1103 |
| 0.1273 | 21.0 | 11697 | 0.1106 |
| 0.1269 | 22.0 | 12254 | 0.1103 |
| 0.1267 | 23.0 | 12811 | 0.1103 |
| 0.1263 | 24.0 | 13368 | 0.1098 |
| 0.126 | 25.0 | 13925 | 0.1098 |
| 0.1257 | 26.0 | 14482 | 0.1098 |
| 0.1253 | 27.0 | 15039 | 0.1101 |
| 0.125 | 28.0 | 15596 | 0.1104 |
| 0.1247 | 29.0 | 16153 | 0.1102 |
| 0.1245 | 30.0 | 16710 | 0.1093 |
| 0.1241 | 31.0 | 17267 | 0.1112 |
| 0.124 | 32.0 | 17824 | 0.1092 |
| 0.1237 | 33.0 | 18381 | 0.1097 |
| 0.1235 | 34.0 | 18938 | 0.1094 |
| 0.1233 | 35.0 | 19495 | 0.1097 |
| 0.1229 | 36.0 | 20052 | 0.1101 |
| 0.1227 | 37.0 | 20609 | 0.1107 |
| 0.1226 | 38.0 | 21166 | 0.1094 |
| 0.1224 | 39.0 | 21723 | 0.1094 |
| 0.1221 | 40.0 | 22280 | 0.1102 |
| 0.122 | 41.0 | 22837 | 0.1109 |
| 0.1218 | 42.0 | 23394 | 0.1101 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
charanhu/sql-gemma-2b | charanhu | 2024-02-22T16:29:36Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-22T16:25:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
The card itself provides no usage snippet; the sketch below is a hypothetical starting point inferred from the repository tags (`text-generation`, `gemma`, 4-bit `bitsandbytes`). The prompt is illustrative.
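```python
# pip install accelerate bitsandbytes
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "charanhu/sql-gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The repo tags indicate a 4-bit bitsandbytes checkpoint; the saved
# quantization config is picked up automatically on load.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a SQL query that returns all customers older than 30."  # illustrative
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```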
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Justus-Jonas/Imaginary-Embeddings-SpeakerTokens-STP | Justus-Jonas | 2024-02-22T16:14:48Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"conversational",
"dataset:daily_dialog",
"arxiv:2211.07591",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-09T12:05:18Z | ---
license: cc-by-nc-sa-4.0
pipeline_tag: conversational
datasets:
- daily_dialog
---
⚠️ **This model is deprecated. Please don't use it as it produces embeddings of low quality.
We recommend using [triple-encoders](https://huggingface.co/UKPLab/triple-encoders-dailydialog) instead, also if you want to use them as a classic bi-encoder.**
Imaginary Embeddings utilize Curved Contrastive Learning (see paper [Imagination Is All You Need!](https://arxiv.org/pdf/2211.07591.pdf) (ACL 2023)) on [Sentence Transformers](https://sbert.net/) for long-short term dialogue planning and efficient abstract sequence modeling.
This model uses speaker tokens and was evaluated in the short-term planning (STP) experiments.
## Setup
```bash
python -m pip install imaginaryNLP
```
## Usage
```python
# `stp` is assumed to be an imaginaryNLP short-term-planning (STP) model
# loaded from this checkpoint; see the imaginaryNLP documentation for the
# exact loader.
candidates = ['Want to eat something out ?',
              'Want to go for a walk ?']
goal = ' I am hungry.'
stp.short_term_planning(candidates, goal)
``` |
AjayYeager/my-pet-dog | AjayYeager | 2024-02-22T16:12:03Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-22T16:07:34Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by AjayYeager following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
|
CaphAlderamin/Reinforce-1 | CaphAlderamin | 2024-02-22T16:11:19Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-22T16:11:10Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
LoneStriker/gemma-7b-5.0bpw-h6-exl2 | LoneStriker | 2024-02-22T16:00:51Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:2305.14314",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T15:57:50Z | ---
library_name: transformers
tags: []
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 7B base version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning examples
You can find fine-tuning notebooks under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples). We provide:
* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using [QLoRA](https://huggingface.co/papers/2305.14314)
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment: `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
| ------------------------------ | ------------- | ----------- | --------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny; their input data pre-processing is described and posterior evaluations
are reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: Continuous monitoring (using evaluation metrics and
  human review) and the exploration of de-biasing techniques are encouraged
  during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, in contrast to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.
|
LoneStriker/gemma-7b-3.0bpw-h6-exl2 | LoneStriker | 2024-02-22T15:55:06Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:2305.14314",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T15:52:52Z | ---
library_name: transformers
tags: []
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 7B base version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First, make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning examples
You can find fine-tuning scripts and notebooks under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples). We provide (a minimal QLoRA sketch follows this list):
* A script to perform Supervised Fine-Tuning (SFT) on the UltraChat dataset using [QLoRA](https://huggingface.co/papers/2305.14314)
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset
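The sketch below shows the general shape of QLoRA SFT with `trl`, `peft`, and `bitsandbytes`; it is not the exact script shipped in `examples/`, and the dataset, text field, and hyperparameters are illustrative placeholders.
```python
# Illustrative QLoRA SFT sketch (pip install trl peft bitsandbytes accelerate).
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig
from trl import SFTTrainer

# 4-bit NF4 quantization keeps the frozen base model small in memory.
bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b",
                                             quantization_config=bnb,
                                             device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")

# Small LoRA adapters are the only trainable parameters.
lora = LoraConfig(r=8, lora_alpha=16,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
dataset = load_dataset("Abirate/english_quotes", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="quote",  # text column of the placeholder dataset
    peft_config=lora,
    max_seq_length=512,
    args=TrainingArguments(output_dir="gemma-7b-sft",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=4,
                           learning_rate=2e-4,
                           max_steps=100),
)
trainer.train()
```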
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First, make sure to install `flash-attn` in your environment: `pip install flash-attn`.
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1803.05457) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1803.05457) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
| ------------------------------ | ------------- | ----------- | --------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny; their input data pre-processing is described and posterior evaluations
are reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: Continuous monitoring (using evaluation metrics and
  human review) and the exploration of de-biasing techniques are encouraged
  during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, in contrast to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.
|
Schnatz65/distilbert-base-uncased-distilled-clinc | Schnatz65 | 2024-02-22T15:51:44Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-12T17:26:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9290322580645162
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0426
- Accuracy: 0.9290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
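For readers who want to reproduce the setup, the hyperparameters above map onto transformers' `TrainingArguments` roughly as follows; this is a reconstruction from the list, not the author's original training script.
```python
# Hypothetical reconstruction of the training configuration listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-distilled-clinc",  # assumed output path
    learning_rate=2e-05,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=9,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 are the transformers defaults.
)
```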
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.83 | 1.0 | 318 | 0.4315 | 0.6626 |
| 0.328 | 2.0 | 636 | 0.1565 | 0.8494 |
| 0.1544 | 3.0 | 954 | 0.0834 | 0.9016 |
| 0.1005 | 4.0 | 1272 | 0.0607 | 0.9197 |
| 0.0794 | 5.0 | 1590 | 0.0518 | 0.9248 |
| 0.0693 | 6.0 | 1908 | 0.0470 | 0.9271 |
| 0.0635 | 7.0 | 2226 | 0.0447 | 0.9277 |
| 0.0602 | 8.0 | 2544 | 0.0430 | 0.9306 |
| 0.0584 | 9.0 | 2862 | 0.0426 | 0.9290 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
villee/mistral01_streamofconsciousnessB_bat1lora8_gguf | villee | 2024-02-22T15:51:28Z | 11 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-21T23:16:58Z | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
Mind-wandering, stream-of-consciousness-type elaboration has been found useful for addressing wicked and ill-defined problems creatively, especially in the very first parts of the design process (problem redefinition). Mind-wandering benefits the creative process by highlighting a multitude of ways to approach the design problem, without entering into concrete solutions. However, producing stream-of-consciousness-type output is challenging for many people, especially amid busy project work.
The purpose of this model is to turn the hallucinative tendency of LLMs into a benefit in creative processes. The focus is on identifying potential design paradoxes (which, based on research, open doors for creative solutions). The model is fine-tuned to continue an input of "Here are my thoughts about the design paradox of [design problem here]" with stream-of-consciousness-like text in which chunks of freely associating design paradox elaboration are followed by quick jumps to the next chunks. The result is detailed mind-wandering on the design context's design paradoxes.
Due to its unstructured nature, the output of this model serves little purpose by itself; therefore, it can and should be systematically analyzed with more structured LLMs, such as OpenAI ChatGPT 4.0 (turbo). To identify design paradoxes and design directions, one can analyze the output, e.g., with this ChatGPT 4.0 prompt: "From this text, go deep and create a list of unexpected design paradoxes that might stimulate creativity: Here are my thoughts about the design paradoxes of [design problem here]: [model output here]". After that, to systematically ideate on an identified paradox, one can use this ChatGPT prompt: "Connected to the problem of [design problem here], create unusual creative platform business ideas based on this design paradox (do not care if the idea is silly, if it is CREATIVE): [selected paradox from ChatGPT output]".
- **Developed by:** Ville Eloranta
- **Funded by [optional]:** n/a
- **Shared by [optional]:** n/a
- **Model type:** n/a
- **Language(s) (NLP):** n/a
- **License:** Apache 2.0
- **Finetuned from model [optional]:** Mistral-7b-v0.1 (non-instruct model)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** villee/mistral01_streamofconsciousnessB_bat1lora8_gguf
- **Paper [optional]:** n/a
- **Demo [optional]:** n/a
## Uses
The model is fine-tuned to continue an input of "Here are my thoughts about the design paradox of [design problem here]" with stream-of-consciousness-like text in which chunks of freely associating design paradox elaboration are followed by quick jumps to the next chunks. The result is detailed mind-wandering on the design context's design paradoxes.
Due to its unstructured nature, the output of this model serves little purpose by itself; therefore, it can and should be systematically analyzed with more structured LLMs, such as OpenAI ChatGPT 4.0 (turbo). To identify design paradoxes and design directions, one can analyze the output, e.g., with this ChatGPT 4.0 prompt: "From this text, go deep and create a list of unexpected design paradoxes that might stimulate creativity: Here are my thoughts about the design paradoxes of [design problem here]: [model output here]". After that, to systematically ideate on an identified paradox, one can use this ChatGPT prompt: "Connected to the problem of [design problem here], create unusual creative platform business ideas based on this design paradox (do not care if the idea is silly, if it is CREATIVE): [selected paradox from ChatGPT output]".
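The snippet below sketches that analysis step with the OpenAI Python client (`pip install openai`); the model name, problem statement, and prompt wiring are illustrative placeholders built from the prompts suggested above.
```python
# Hypothetical sketch of the suggested paradox-extraction step.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
problem = "making the electricity markets more stable"  # example design problem
stream_output = "..."  # paste the raw stream-of-consciousness output here

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{
        "role": "user",
        "content": (
            "From this text, go deep and create a list of unexpected design "
            "paradoxes that might stimulate creativity: Here are my thoughts "
            f"about the design paradoxes of {problem}: {stream_output}"
        ),
    }],
)
print(response.choices[0].message.content)  # list of candidate paradoxes
```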
Note:
- Usage: Prompt "Here are my thoughts about the design paradox of [design problem here]"
- the model should be used with a high temperature (e.g., 0.8-1.0) and long contexts (e.g., 4096)
- by design, the model might produce repetitive content - please break the repetition if the content no longer progresses
- the model output might have a weird format; that is also by design
## Bias, Risks, and Limitations
There is no moderation in the model, so use it at your own risk.
### Recommendations
Not for production usage.
## How to Get Started with the Model
- **Start ollama in one shell:** ollama serve
- **In another shell, download the model:** curl -L https://huggingface.co/villee/mistral01_streamofconsciousnessB_bat1lora8_gguf/resolve/main/streamofconsciousnessB_bat1lora8.gguf -o streamofconsciousnessB_bat1lora8.gguf
- **Create Modelfile:** FROM "streamofconsciousnessB_bat1lora8.gguf" PARAMETER temperature 1 PARAMETER num_ctx 4096
- **Create ollama instance:** ollama create streamofconsciousness -f Modelfile
- **Infer with ollama:** ollama run streamofconsciousness "Here are my thoughts about the design paradoxes of [design challenge]:"
- **e.g.** "Here are my thoughts about the design paradoxes of making the electricity markets more stable in a situation where the price of renewable power sources fluctuates wildly:"
- **Rerun after you get a nice long stream of consciousness.** (A Python alternative using llama-cpp-python is sketched below.)
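As an alternative to ollama, the same GGUF file can be run directly from Python with `llama-cpp-python` (`pip install llama-cpp-python`). This is a sketch under the assumptions above: the file path matches the download step, and the parameters mirror the Modelfile.
```python
# Sketch: run the downloaded GGUF with llama-cpp-python instead of ollama.
from llama_cpp import Llama

llm = Llama(model_path="streamofconsciousnessB_bat1lora8.gguf", n_ctx=4096)
out = llm(
    "Here are my thoughts about the design paradoxes of making the "
    "electricity markets more stable in a situation where the price of "
    "renewable power sources fluctuates wildly:",
    max_tokens=1024,
    temperature=1.0,  # a high temperature encourages the mind-wandering style
)
print(out["choices"][0]["text"])
```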
## Training Details
### Training Data
- the model is based on Mistral-7b-v0.1 (non-instruct model)
- the fine-tune dataset is villee/streamofconsciousness (it contains 200 rows of fine-tune data that enable Mistral to produce stream-of-consciousness-type output)
### Training Procedure
- fine-tuning was done through LoRA (batch size 1, 8 LoRA layers) with Apple MLX
|
Tawkat/qlora-bm-ep1 | Tawkat | 2024-02-22T15:40:51Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T15:33:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alex62i2h/cumich | alex62i2h | 2024-02-22T15:40:23Z | 0 | 0 | null | [
"ru",
"license:unknown",
"region:us"
] | null | 2024-02-22T15:39:30Z | ---
license: unknown
language:
- ru
--- |
LoneStriker/gemma-2b-8.0bpw-h8-exl2 | LoneStriker | 2024-02-22T15:39:12Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T15:37:37Z | ---
library_name: transformers
tags: []
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 2B base version of the Gemma model. You can also visit the model card of the [7B base model](https://huggingface.co/google/gemma-7b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First, make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning the model
You can find fine-tuning scripts and a notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of the [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt them to this model, simply change the model id to `google/gemma-2b`.
In that repository, we provide:
* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(**input_text, return_tensors="pt")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First, make sure to install `flash-attn` in your environment: `pip install flash-attn`.
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1803.05457) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1803.05457) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
| ------------------------------ | ------------- | ----------- | --------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, with input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, in contrast to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open model
alternatives.
|
LoneStriker/gemma-2b-6.0bpw-h6-exl2 | LoneStriker | 2024-02-22T15:37:36Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T15:36:15Z | ---
library_name: transformers
tags: []
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 2B base version of the Gemma model. You can also visit the model card of the [7B base model](https://huggingface.co/google/gemma-7b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get started quickly with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning the model
You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-2b`.
In that repository, we provide:
* A script to perform Supervised Fine-Tuning (SFT) on the UltraChat dataset using QLoRA (a minimal sketch of this kind of setup follows this list)
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset
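For orientation, here is a minimal, untested sketch of what such a QLoRA SFT setup can look like with `trl` and `peft`. The dataset (`Abirate/english_quotes` with its `quote` text field), the LoRA settings, and the sequence length are illustrative assumptions, not the exact configuration used in the scripts above.
```python
# pip install -U trl peft bitsandbytes datasets accelerate
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTTrainer

model_id = "google/gemma-2b"

# Load the base model in 4-bit (QLoRA-style)
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Low-rank adapters on the attention projections (illustrative settings)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Illustrative dataset with a plain-text "quote" column
dataset = load_dataset("Abirate/english_quotes", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="quote",
    max_seq_length=512,
    peft_config=lora_config,
)
trainer.train()
```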
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment: `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
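The exact filters are not public. Purely as a toy illustration of the kind of pattern-based redaction such pipelines include (real systems combine many detectors, classifiers, and human review, none of which is shown here), a sketch:
```python
import re

# Toy illustration only -- not Google's actual filtering pipeline.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace obvious e-mail addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact_pii("Contact jane.doe@example.com or +1 (555) 123-4567."))
# -> Contact [EMAIL] or [PHONE].
```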
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
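As a tiny, illustrative JAX sketch of that single-controller style (unrelated to the actual Gemma training code), one Python process can drive every local accelerator:
```python
import jax
import jax.numpy as jnp

# One Python process drives all local accelerators: pmap replicates the
# function across devices and splits the leading axis of its input.
@jax.pmap
def scaled_sum(x):
    return jnp.sum(x) * 2.0

n = jax.local_device_count()
batch = jnp.arange(n * 4, dtype=jnp.float32).reshape(n, 4)  # one shard per device
print(scaled_sum(batch))  # one value per device
```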
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, with input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, in contrast to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open model
alternatives.
|
Samuela39/distilroberta-base-sanskrit-classic | Samuela39 | 2024-02-22T15:37:21Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-02-22T14:47:56Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-sanskrit-classic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-sanskrit-classic
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0231
## Model description
More information needed
## Intended uses & limitations
More information needed
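Pending that documentation, the checkpoint can presumably be queried like any RoBERTa-style masked language model. A minimal, untested sketch (the Sanskrit prompt is only a placeholder):
```python
from transformers import pipeline

# Untested sketch: distilroberta-based checkpoints use RoBERTa's <mask> token.
fill = pipeline("fill-mask", model="Samuela39/distilroberta-base-sanskrit-classic")

# Placeholder prompt -- substitute Sanskrit text matching the training corpus.
for pred in fill("रामः वनं <mask>।"):
    print(pred["token_str"], round(pred["score"], 3))
```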
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8514 | 1.0 | 2500 | 1.0913 |
| 0.7606 | 2.0 | 5000 | 1.0399 |
| 0.7233 | 3.0 | 7500 | 1.0179 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
luccidomingues/autotrain-8fohv-7gjpn | luccidomingues | 2024-02-22T15:35:12Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:autotrain-8fohv-7gjpn/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-22T15:34:57Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- autotrain-8fohv-7gjpn/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.64765864610672
f1: 0.6666666666666666
precision: 0.5
recall: 1.0
auc: 1.0
accuracy: 0.5
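For reference, the checkpoint can be loaded through the standard `transformers` text-classification pipeline. A minimal, untested sketch (label names come from the model config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="luccidomingues/autotrain-8fohv-7gjpn")
print(classifier("I love AutoTrain"))  # e.g. [{'label': '...', 'score': 0.98}]
```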
|
LoneStriker/gemma-2b-4.0bpw-h6-exl2 | LoneStriker | 2024-02-22T15:34:57Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T15:33:48Z | ---
library_name: transformers
tags: []
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 2B base version of the Gemma model. You can also visit the model card of the [7B base model](https://huggingface.co/google/gemma-7b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get started quickly with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning the model
You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-2b`.
In that repository, we provide:
* A script to perform Supervised Fine-Tuning (SFT) on the UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment: `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, with input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, in contrast to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open model
alternatives.
|
mridhulanatarajan/layoutlm-funsd | mridhulanatarajan | 2024-02-22T15:34:16Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"layoutlm",
"token-classification",
"generated_from_trainer",
"dataset:funsd",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-21T07:02:37Z | ---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: layoutlm-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9828
- Question: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}
- Overall Precision: 1.0
- Overall Recall: 1.0
- Overall F1: 1.0
- Overall Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.585 | 1.0 | 38 | 1.3020 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | 1.0 | 1.0 | 1.0 | 1.0 |
| 1.1814 | 2.0 | 76 | 1.1133 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | 1.0 | 1.0 | 1.0 | 1.0 |
| 1.0181 | 3.0 | 114 | 1.0476 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.9213 | 4.0 | 152 | 1.0004 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.8337 | 5.0 | 190 | 0.9828 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1} | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
LoneStriker/gemma-2b-it-8.0bpw-h8-exl2 | LoneStriker | 2024-02-22T15:32:20Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T15:30:42Z | ---
library_name: transformers
tags: []
widget:
- text: |
<start_of_turn>user
How does the brain work?<end_of_turn>
<start_of_turn>model
inference:
parameters:
max_new_tokens: 200
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 2B instruct version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B base model](https://huggingface.co/google/gemma-7b), and [7B instruct model](https://huggingface.co/google/gemma-7b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-it-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get started quickly with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment with `pip install flash-attn`; a complete usage sketch follows the diff below.
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
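Putting it together, a complete sketch (assuming the `google/gemma-2b-it` checkpoint used elsewhere in this card and a GPU supported by Flash Attention 2):
```python
# pip install flash-attn accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/gemma-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
).to(0)  # Flash Attention 2 requires a CUDA device

input_ids = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt").to(0)
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```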
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "google/gemma-2b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
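If you do build it manually, a minimal sketch (our illustration of the format above, not an official helper) could look like this:
```py
# Hand-rolled version of the turn format shown above (illustrative only).
def build_prompt(chat):
    prompt = ""
    for turn in chat:
        prompt += f"<start_of_turn>{turn['role']}\n{turn['content']}<end_of_turn>\n"
    prompt += "<start_of_turn>model\n"  # open the model turn so generation continues it
    return prompt

prompt = build_prompt([{"role": "user", "content": "Write a hello world program"}])
```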
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
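The returned sequence includes the prompt tokens. To print only the model's reply, you can slice them off first (a usage sketch building on the `inputs` and `outputs` variables above):
```py
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```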
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1803.05457) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1803.05457) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny; the input data pre-processing and posterior evaluations are
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open model
alternatives.
|
LoneStriker/gemma-2b-it-6.0bpw-h6-exl2 | LoneStriker | 2024-02-22T15:30:41Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T15:29:19Z | ---
library_name: transformers
tags: []
widget:
- text: |
<start_of_turn>user
How does the brain work?<end_of_turn>
<start_of_turn>model
inference:
parameters:
max_new_tokens: 200
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 2B instruct version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B base model](https://huggingface.co/google/gemma-7b), and [7B instruct model](https://huggingface.co/google/gemma-7b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-it-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop, or your own cloud infrastructure, democratizing access to
state-of-the-art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment with `pip install flash-attn`; a complete usage sketch follows the diff below.
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
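Putting it together, a complete sketch (assuming the `google/gemma-2b-it` checkpoint used elsewhere in this card and a GPU supported by Flash Attention 2):
```python
# pip install flash-attn accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/gemma-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
).to(0)  # Flash Attention 2 requires a CUDA device

input_ids = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt").to(0)
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```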
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "google/gemma-2b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
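If you do build it manually, a minimal sketch (our illustration of the format above, not an official helper) could look like this:
```py
# Hand-rolled version of the turn format shown above (illustrative only).
def build_prompt(chat):
    prompt = ""
    for turn in chat:
        prompt += f"<start_of_turn>{turn['role']}\n{turn['content']}<end_of_turn>\n"
    prompt += "<start_of_turn>model\n"  # open the model turn so generation continues it
    return prompt

prompt = build_prompt([{"role": "user", "content": "Write a hello world program"}])
```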
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
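The returned sequence includes the prompt tokens. To print only the model's reply, you can slice them off first (a usage sketch building on the `inputs` and `outputs` variables above):
```py
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```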
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1803.05457) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1803.05457) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny; the input data pre-processing and posterior evaluations are
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open model
alternatives.
|
CMLL/ZhongJing-2-0_5b2 | CMLL | 2024-02-22T15:28:29Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:adapter:Qwen/Qwen1.5-0.5B-Chat",
"license:other",
"region:us"
] | null | 2024-02-22T15:27:44Z | ---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: Qwen/Qwen1.5-0.5B-Chat
model-index:
- name: train_2024-02-22-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_2024-02-22-19
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) on the TCM and the oaast_sft_zh datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough mapping to code follows the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
- mixed_precision_training: Native AMP
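As an illustration only (not the original training script; dataset loading and the LoRA configuration are omitted), the listed hyperparameters map onto `transformers.TrainingArguments` roughly as follows:
```python
# Hypothetical reconstruction of the listed hyperparameters.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="train_2024-02-22-19",  # run name from this card
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,     # total train batch size: 16
    lr_scheduler_type="cosine",
    num_train_epochs=3.0,
    fp16=True,                         # "Native AMP" mixed precision
)
```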
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 |
LoneStriker/gemma-2b-it-4.0bpw-h6-exl2 | LoneStriker | 2024-02-22T15:27:59Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T15:26:48Z | ---
library_name: transformers
tags: []
widget:
- text: |
<start_of_turn>user
How does the brain work?<end_of_turn>
<start_of_turn>model
inference:
parameters:
max_new_tokens: 200
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 2B instruct version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B base model](https://huggingface.co/google/gemma-7b), and [7B instruct model](https://huggingface.co/google/gemma-7b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-it-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop, or your own cloud infrastructure, democratizing access to
state-of-the-art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment with `pip install flash-attn`; a complete usage sketch follows the diff below.
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
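Putting it together, a complete sketch (assuming the `google/gemma-2b-it` checkpoint used elsewhere in this card and a GPU supported by Flash Attention 2):
```python
# pip install flash-attn accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/gemma-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
).to(0)  # Flash Attention 2 requires a CUDA device

input_ids = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt").to(0)
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```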
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "google/gemma-2b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
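If you do build it manually, a minimal sketch (our illustration of the format above, not an official helper) could look like this:
```py
# Hand-rolled version of the turn format shown above (illustrative only).
def build_prompt(chat):
    prompt = ""
    for turn in chat:
        prompt += f"<start_of_turn>{turn['role']}\n{turn['content']}<end_of_turn>\n"
    prompt += "<start_of_turn>model\n"  # open the model turn so generation continues it
    return prompt

prompt = build_prompt([{"role": "user", "content": "Write a hello world program"}])
```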
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
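The returned sequence includes the prompt tokens. To print only the model's reply, you can slice them off first (a usage sketch building on the `inputs` and `outputs` variables above):
```py
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```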
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1803.05457) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1803.05457) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: Continuous monitoring (using evaluation metrics and
human review) and the exploration of de-biasing techniques are encouraged
during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, in contrast to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other comparably sized open
model alternatives.
|
LoneStriker/gemma-2b-it-3.0bpw-h6-exl2 | LoneStriker | 2024-02-22T15:26:47Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T15:24:59Z | ---
library_name: transformers
tags: []
widget:
- text: |
<start_of_turn>user
How does the brain work?<end_of_turn>
<start_of_turn>model
inference:
parameters:
max_new_tokens: 200
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 2B instruct version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B base model](https://huggingface.co/google/gemma-7b), and [7B instruct model](https://huggingface.co/google/gemma-7b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-it-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
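For reference, here is a minimal sketch of building the same single-turn prompt by hand (the literal token strings mirror the template shown above):

```py
# manual construction of the single-turn prompt, mirroring the chat template
user_message = "Write a hello world program"
prompt = (
    "<start_of_turn>user\n"
    f"{user_message}<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```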
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
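To inspect only the model's reply, you can slice off the prompt tokens before decoding (a small usage sketch; it assumes `generate` returns the prompt tokens unchanged at the start of the sequence, which is the default behavior):

```py
# decode only the newly generated tokens
response_ids = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(response_ids, skip_special_tokens=True))
```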
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety, in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: Continuous monitoring (using evaluation metrics and
human review) and the exploration of de-biasing techniques are encouraged
during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, in contrast to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other comparably sized open
model alternatives.
|
LiukG/gut_1024-finetuned-lora-NT-v2-250m-ms | LiukG | 2024-02-22T15:21:21Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"esm",
"text-classification",
"generated_from_trainer",
"custom_code",
"base_model:InstaDeepAI/nucleotide-transformer-v2-250m-multi-species",
"base_model:finetune:InstaDeepAI/nucleotide-transformer-v2-250m-multi-species",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-22T15:20:23Z | ---
license: cc-by-nc-sa-4.0
base_model: InstaDeepAI/nucleotide-transformer-v2-250m-multi-species
tags:
- generated_from_trainer
metrics:
- f1
- matthews_correlation
- accuracy
model-index:
- name: gut_1024b-finetuned-lora-v2-250m-multi-species
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gut_1024b-finetuned-lora-v2-250m-multi-species
This model is a fine-tuned version of [InstaDeepAI/nucleotide-transformer-v2-250m-multi-species](https://huggingface.co/InstaDeepAI/nucleotide-transformer-v2-250m-multi-species) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4815
- F1: 0.8414
- Matthews Correlation: 0.5610
- Accuracy: 0.7880
- F1 Score: 0.8414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Matthews Correlation | Accuracy | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------------------:|:--------:|:--------:|
| 0.682 | 0.02 | 100 | 0.5545 | 0.8132 | 0.4597 | 0.7369 | 0.8132 |
| 0.6379 | 0.04 | 200 | 0.6119 | 0.7498 | 0.4244 | 0.7154 | 0.7498 |
| 0.5973 | 0.05 | 300 | 0.5226 | 0.8221 | 0.5154 | 0.7707 | 0.8221 |
| 0.5451 | 0.07 | 400 | 0.5159 | 0.8244 | 0.5010 | 0.7521 | 0.8244 |
| 0.5538 | 0.09 | 500 | 0.5538 | 0.8102 | 0.5043 | 0.7648 | 0.8102 |
| 0.549 | 0.11 | 600 | 0.5220 | 0.8258 | 0.5188 | 0.7715 | 0.8258 |
| 0.4887 | 0.12 | 700 | 0.4940 | 0.8330 | 0.5317 | 0.7728 | 0.8330 |
| 0.4893 | 0.14 | 800 | 0.4951 | 0.8352 | 0.5519 | 0.7872 | 0.8352 |
| 0.4794 | 0.16 | 900 | 0.5008 | 0.8443 | 0.5687 | 0.7893 | 0.8443 |
| 0.5437 | 0.18 | 1000 | 0.4815 | 0.8414 | 0.5610 | 0.7880 | 0.8414 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
mcanoglu/deepseek-ai-deepseek-coder-1.3b-base-finetuned-defect-cwe-group-detection | mcanoglu | 2024-02-22T15:19:41Z | 19 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-1.3b-base",
"base_model:finetune:deepseek-ai/deepseek-coder-1.3b-base",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-22T15:44:02Z | ---
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: deepseek-ai-deepseek-coder-1.3b-base-finetuned-defect-cwe-group-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepseek-ai-deepseek-coder-1.3b-base-finetuned-defect-cwe-group-detection
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6902
- Accuracy: 0.7715
- Precision: 0.8036
- Recall: 0.5867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 4711
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|
| No log | 1.0 | 462 | 0.4904 | 0.7800 | 0.6028 | 0.5178 |
| 0.5739 | 2.0 | 925 | 0.4917 | 0.7985 | 0.8159 | 0.5552 |
| 0.3111 | 3.0 | 1387 | 0.6582 | 0.7918 | 0.7907 | 0.5901 |
| 0.2395 | 4.0 | 1850 | 0.6238 | 0.7800 | 0.8018 | 0.6132 |
| 0.2047 | 4.99 | 2310 | 0.6902 | 0.7715 | 0.8036 | 0.5867 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
ThuyNT03/CS505_COQE_viT5_Prompting7_ASPOL | ThuyNT03 | 2024-02-22T15:19:08Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-22T14:17:49Z | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_COQE_viT5_Prompting7_ASPOL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_Prompting7_ASPOL
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
Kishan/ppo-LunarLander-v2 | Kishan | 2024-02-22T15:18:44Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-22T15:18:10Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.87 +/- 21.17
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# download the trained policy from the Hub and load it
# (the filename is an assumption; adjust it if the repo uses another name)
checkpoint = load_from_hub("Kishan/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
MaggieZhang/myclassification | MaggieZhang | 2024-02-22T15:16:27Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-02-22T11:38:49Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: distilbert-base-uncased
model-index:
- name: myclassification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# myclassification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1432
- Accuracy: 0.9388
## Model description
More information needed
## Intended uses & limitations
More information needed
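Pending further documentation, the LoRA adapter can presumably be loaded on top of its base model with PEFT. A minimal sketch (the binary `num_labels=2` is an assumption, since the card does not document the label set):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# load the base model, then attach the fine-tuned adapter from this repo
base = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # label count is an assumption
)
model = PeftModel.from_pretrained(base, "MaggieZhang/myclassification")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```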
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6881 | 1.0 | 625 | 0.5453 | 0.7528 |
| 0.5585 | 2.0 | 1250 | 0.4954 | 0.7574 |
| 0.5185 | 3.0 | 1875 | 0.4485 | 0.8018 |
| 0.4635 | 4.0 | 2500 | 0.4274 | 0.8236 |
| 0.4556 | 5.0 | 3125 | 0.4262 | 0.8264 |
| 0.431 | 6.0 | 3750 | 0.4520 | 0.8258 |
| 0.4422 | 7.0 | 4375 | 0.4324 | 0.829 |
| 0.4276 | 8.0 | 5000 | 0.3828 | 0.8342 |
| 0.4137 | 9.0 | 5625 | 0.4053 | 0.8306 |
| 0.4282 | 10.0 | 6250 | 0.3915 | 0.834 |
| 0.4131 | 11.0 | 6875 | 0.4001 | 0.8342 |
| 0.403 | 12.0 | 7500 | 0.3894 | 0.834 |
| 0.4098 | 13.0 | 8125 | 0.3739 | 0.8352 |
| 0.3976 | 14.0 | 8750 | 0.3936 | 0.8298 |
| 0.4015 | 15.0 | 9375 | 0.3794 | 0.836 |
| 0.3979 | 16.0 | 10000 | 0.3737 | 0.841 |
| 0.3894 | 17.0 | 10625 | 0.3610 | 0.8364 |
| 0.3884 | 18.0 | 11250 | 0.3530 | 0.8312 |
| 0.3852 | 19.0 | 11875 | 0.3564 | 0.8348 |
| 0.3806 | 20.0 | 12500 | 0.3507 | 0.842 |
| 0.3803 | 21.0 | 13125 | 0.3439 | 0.8392 |
| 0.3757 | 22.0 | 13750 | 0.3391 | 0.8386 |
| 0.37 | 23.0 | 14375 | 0.3244 | 0.8428 |
| 0.3781 | 24.0 | 15000 | 0.3200 | 0.8442 |
| 0.3662 | 25.0 | 15625 | 0.3418 | 0.8458 |
| 0.3515 | 26.0 | 16250 | 0.3043 | 0.8522 |
| 0.3615 | 27.0 | 16875 | 0.2973 | 0.8606 |
| 0.3532 | 28.0 | 17500 | 0.3105 | 0.8558 |
| 0.3498 | 29.0 | 18125 | 0.2971 | 0.8664 |
| 0.3564 | 30.0 | 18750 | 0.3051 | 0.8684 |
| 0.3469 | 31.0 | 19375 | 0.3050 | 0.8688 |
| 0.349 | 32.0 | 20000 | 0.2813 | 0.864 |
| 0.3294 | 33.0 | 20625 | 0.2898 | 0.8716 |
| 0.3371 | 34.0 | 21250 | 0.2921 | 0.8728 |
| 0.3254 | 35.0 | 21875 | 0.2812 | 0.8744 |
| 0.3382 | 36.0 | 22500 | 0.2816 | 0.8622 |
| 0.3402 | 37.0 | 23125 | 0.2905 | 0.873 |
| 0.3333 | 38.0 | 23750 | 0.2832 | 0.863 |
| 0.3084 | 39.0 | 24375 | 0.3017 | 0.8734 |
| 0.3421 | 40.0 | 25000 | 0.2876 | 0.8718 |
| 0.3113 | 41.0 | 25625 | 0.2759 | 0.8642 |
| 0.3223 | 42.0 | 26250 | 0.2814 | 0.8746 |
| 0.3154 | 43.0 | 26875 | 0.2691 | 0.8684 |
| 0.3185 | 44.0 | 27500 | 0.2780 | 0.8726 |
| 0.3074 | 45.0 | 28125 | 0.2596 | 0.88 |
| 0.3037 | 46.0 | 28750 | 0.2645 | 0.8822 |
| 0.3035 | 47.0 | 29375 | 0.2498 | 0.8848 |
| 0.3144 | 48.0 | 30000 | 0.2552 | 0.8742 |
| 0.3057 | 49.0 | 30625 | 0.2453 | 0.8876 |
| 0.2972 | 50.0 | 31250 | 0.2412 | 0.891 |
| 0.2962 | 51.0 | 31875 | 0.2394 | 0.8938 |
| 0.2931 | 52.0 | 32500 | 0.2502 | 0.8948 |
| 0.2908 | 53.0 | 33125 | 0.2398 | 0.8972 |
| 0.288 | 54.0 | 33750 | 0.2314 | 0.8972 |
| 0.2872 | 55.0 | 34375 | 0.2221 | 0.9016 |
| 0.2885 | 56.0 | 35000 | 0.2404 | 0.8932 |
| 0.2828 | 57.0 | 35625 | 0.2145 | 0.9046 |
| 0.2786 | 58.0 | 36250 | 0.2171 | 0.9038 |
| 0.267 | 59.0 | 36875 | 0.2191 | 0.9062 |
| 0.2689 | 60.0 | 37500 | 0.2012 | 0.9084 |
| 0.2716 | 61.0 | 38125 | 0.2061 | 0.9096 |
| 0.2707 | 62.0 | 38750 | 0.2156 | 0.912 |
| 0.275 | 63.0 | 39375 | 0.1997 | 0.911 |
| 0.2355 | 64.0 | 40000 | 0.1991 | 0.9128 |
| 0.2692 | 65.0 | 40625 | 0.1910 | 0.914 |
| 0.2591 | 66.0 | 41250 | 0.1833 | 0.9166 |
| 0.2694 | 67.0 | 41875 | 0.1838 | 0.9228 |
| 0.2762 | 68.0 | 42500 | 0.1776 | 0.9244 |
| 0.2596 | 69.0 | 43125 | 0.1820 | 0.924 |
| 0.2624 | 70.0 | 43750 | 0.1893 | 0.9218 |
| 0.2442 | 71.0 | 44375 | 0.1764 | 0.9234 |
| 0.2601 | 72.0 | 45000 | 0.1652 | 0.9292 |
| 0.2614 | 73.0 | 45625 | 0.1701 | 0.9232 |
| 0.2579 | 74.0 | 46250 | 0.1627 | 0.9308 |
| 0.2562 | 75.0 | 46875 | 0.1616 | 0.9306 |
| 0.244 | 76.0 | 47500 | 0.1630 | 0.9312 |
| 0.2368 | 77.0 | 48125 | 0.1616 | 0.9298 |
| 0.2619 | 78.0 | 48750 | 0.1658 | 0.93 |
| 0.2249 | 79.0 | 49375 | 0.1596 | 0.9316 |
| 0.254 | 80.0 | 50000 | 0.1525 | 0.9334 |
| 0.2467 | 81.0 | 50625 | 0.1596 | 0.9336 |
| 0.2311 | 82.0 | 51250 | 0.1577 | 0.932 |
| 0.2422 | 83.0 | 51875 | 0.1502 | 0.9346 |
| 0.2224 | 84.0 | 52500 | 0.1500 | 0.9358 |
| 0.2377 | 85.0 | 53125 | 0.1499 | 0.937 |
| 0.2442 | 86.0 | 53750 | 0.1498 | 0.9364 |
| 0.2285 | 87.0 | 54375 | 0.1506 | 0.9354 |
| 0.2361 | 88.0 | 55000 | 0.1479 | 0.9362 |
| 0.2416 | 89.0 | 55625 | 0.1461 | 0.9372 |
| 0.2315 | 90.0 | 56250 | 0.1462 | 0.9362 |
| 0.2282 | 91.0 | 56875 | 0.1471 | 0.9348 |
| 0.2293 | 92.0 | 57500 | 0.1479 | 0.9348 |
| 0.2246 | 93.0 | 58125 | 0.1484 | 0.9376 |
| 0.2568 | 94.0 | 58750 | 0.1434 | 0.9384 |
| 0.2356 | 95.0 | 59375 | 0.1454 | 0.9374 |
| 0.2357 | 96.0 | 60000 | 0.1432 | 0.9378 |
| 0.2301 | 97.0 | 60625 | 0.1421 | 0.9386 |
| 0.2321 | 98.0 | 61250 | 0.1425 | 0.9386 |
| 0.241 | 99.0 | 61875 | 0.1427 | 0.9392 |
| 0.2283 | 100.0 | 62500 | 0.1432 | 0.9388 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 |
blzncz/segformer-finetuned-4ss1st3r_s3gs3m_24Jan-10k-steps | blzncz | 2024-02-22T15:08:16Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"image-segmentation",
"vision",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-01-17T09:17:16Z | ---
license: other
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: segformer-finetuned-4ss1st3r_s3gs3m_24Jan-10k-steps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-finetuned-4ss1st3r_s3gs3m_24Jan-10k-steps
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the blzncz/4ss1st3r_s3gs3m_24Jan dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1305
- Mean Iou: 0.6564
- Mean Accuracy: 0.8562
- Overall Accuracy: 0.9780
- Accuracy Bg: nan
- Accuracy Fallo cohesivo: 0.9896
- Accuracy Fallo malla: 0.9270
- Accuracy Fallo adhesivo: 0.9478
- Accuracy Fallo burbuja: 0.5603
- Iou Bg: 0.0
- Iou Fallo cohesivo: 0.9749
- Iou Fallo malla: 0.8458
- Iou Fallo adhesivo: 0.9324
- Iou Fallo burbuja: 0.5290
## Model description
More information needed
## Intended uses & limitations
More information needed
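Until usage is documented, inference should follow the standard Segformer pipeline. A minimal sketch (the image path is a placeholder, and the processor is loaded from the `nvidia/mit-b0` base since this repo may not ship its own processor config):

```python
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

processor = SegformerImageProcessor.from_pretrained("nvidia/mit-b0")
model = SegformerForSemanticSegmentation.from_pretrained(
    "blzncz/segformer-finetuned-4ss1st3r_s3gs3m_24Jan-10k-steps"
)

image = Image.open("sample.png")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits  # (batch, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)      # per-pixel class ids
```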
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Bg | Accuracy Fallo cohesivo | Accuracy Fallo malla | Accuracy Fallo adhesivo | Accuracy Fallo burbuja | Iou Bg | Iou Fallo cohesivo | Iou Fallo malla | Iou Fallo adhesivo | Iou Fallo burbuja |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------:|:-----------------------:|:--------------------:|:-----------------------:|:----------------------:|:------:|:------------------:|:---------------:|:------------------:|:-----------------:|
| 0.3639 | 1.0 | 193 | 0.1583 | 0.6076 | 0.8441 | 0.9607 | nan | 0.9660 | 0.9617 | 0.9644 | 0.4844 | 0.0 | 0.9553 | 0.7294 | 0.9301 | 0.4231 |
| 0.1148 | 2.0 | 386 | 0.0991 | 0.6189 | 0.8025 | 0.9754 | nan | 0.9912 | 0.9045 | 0.9417 | 0.3725 | 0.0 | 0.9723 | 0.8404 | 0.9283 | 0.3534 |
| 0.0937 | 3.0 | 579 | 0.1414 | 0.5848 | 0.8155 | 0.9554 | nan | 0.9606 | 0.9630 | 0.9707 | 0.3675 | 0.0 | 0.9487 | 0.6791 | 0.9442 | 0.3519 |
| 0.0827 | 4.0 | 772 | 0.1028 | 0.6390 | 0.8484 | 0.9747 | nan | 0.9831 | 0.9530 | 0.9640 | 0.4936 | 0.0 | 0.9714 | 0.8231 | 0.9388 | 0.4617 |
| 0.0735 | 5.0 | 965 | 0.0948 | 0.6425 | 0.8423 | 0.9777 | nan | 0.9875 | 0.9487 | 0.9594 | 0.4737 | 0.0 | 0.9745 | 0.8484 | 0.9415 | 0.4479 |
| 0.0716 | 6.0 | 1158 | 0.0968 | 0.6638 | 0.8622 | 0.9804 | nan | 0.9936 | 0.8987 | 0.9579 | 0.5985 | 0.0 | 0.9777 | 0.8654 | 0.9403 | 0.5355 |
| 0.0692 | 7.0 | 1351 | 0.1123 | 0.6389 | 0.8535 | 0.9718 | nan | 0.9804 | 0.9425 | 0.9604 | 0.5307 | 0.0 | 0.9678 | 0.7878 | 0.9403 | 0.4984 |
| 0.0718 | 8.0 | 1544 | 0.1097 | 0.6424 | 0.8668 | 0.9703 | nan | 0.9770 | 0.9520 | 0.9642 | 0.5738 | 0.0 | 0.9663 | 0.7792 | 0.9423 | 0.5243 |
| 0.0613 | 9.0 | 1737 | 0.1212 | 0.6341 | 0.8625 | 0.9669 | nan | 0.9735 | 0.9412 | 0.9721 | 0.5634 | 0.0 | 0.9621 | 0.7447 | 0.9430 | 0.5208 |
| 0.06 | 10.0 | 1930 | 0.0983 | 0.6724 | 0.8945 | 0.9793 | nan | 0.9875 | 0.9335 | 0.9682 | 0.6889 | 0.0 | 0.9765 | 0.8490 | 0.9461 | 0.5905 |
| 0.0593 | 11.0 | 2123 | 0.1104 | 0.6577 | 0.8803 | 0.9743 | nan | 0.9830 | 0.9249 | 0.9670 | 0.6462 | 0.0 | 0.9709 | 0.8028 | 0.9419 | 0.5729 |
| 0.056 | 12.0 | 2316 | 0.1029 | 0.6589 | 0.8829 | 0.9755 | nan | 0.9833 | 0.9349 | 0.9712 | 0.6420 | 0.0 | 0.9721 | 0.8170 | 0.9399 | 0.5655 |
| 0.0547 | 13.0 | 2509 | 0.1037 | 0.6613 | 0.8944 | 0.9746 | nan | 0.9815 | 0.9406 | 0.9680 | 0.6877 | 0.0 | 0.9712 | 0.8089 | 0.9434 | 0.5832 |
| 0.0538 | 14.0 | 2702 | 0.1342 | 0.6338 | 0.8750 | 0.9625 | nan | 0.9677 | 0.9470 | 0.9647 | 0.6204 | 0.0 | 0.9570 | 0.7080 | 0.9412 | 0.5627 |
| 0.052 | 15.0 | 2895 | 0.0961 | 0.6525 | 0.8507 | 0.9787 | nan | 0.9894 | 0.9292 | 0.9656 | 0.5187 | 0.0 | 0.9758 | 0.8514 | 0.9439 | 0.4915 |
| 0.0489 | 16.0 | 3088 | 0.1093 | 0.6464 | 0.8626 | 0.9725 | nan | 0.9812 | 0.9345 | 0.9639 | 0.5708 | 0.0 | 0.9688 | 0.7900 | 0.9440 | 0.5290 |
| 0.0478 | 17.0 | 3281 | 0.1053 | 0.6503 | 0.8574 | 0.9760 | nan | 0.9858 | 0.9300 | 0.9673 | 0.5465 | 0.0 | 0.9726 | 0.8239 | 0.9411 | 0.5138 |
| 0.048 | 18.0 | 3474 | 0.1314 | 0.6416 | 0.8884 | 0.9644 | nan | 0.9691 | 0.9517 | 0.9642 | 0.6688 | 0.0 | 0.9591 | 0.7232 | 0.9415 | 0.5842 |
| 0.0474 | 19.0 | 3667 | 0.1197 | 0.6473 | 0.8559 | 0.9743 | nan | 0.9842 | 0.9344 | 0.9557 | 0.5493 | 0.0 | 0.9707 | 0.8067 | 0.9394 | 0.5196 |
| 0.0456 | 20.0 | 3860 | 0.1149 | 0.6587 | 0.8578 | 0.9788 | nan | 0.9905 | 0.9241 | 0.9503 | 0.5665 | 0.0 | 0.9759 | 0.8513 | 0.9344 | 0.5321 |
| 0.044 | 21.0 | 4053 | 0.1183 | 0.6574 | 0.8612 | 0.9774 | nan | 0.9885 | 0.9280 | 0.9487 | 0.5794 | 0.0 | 0.9743 | 0.8367 | 0.9345 | 0.5413 |
| 0.0431 | 22.0 | 4246 | 0.1326 | 0.6425 | 0.8599 | 0.9711 | nan | 0.9795 | 0.9405 | 0.9595 | 0.5601 | 0.0 | 0.9670 | 0.7783 | 0.9384 | 0.5291 |
| 0.0446 | 23.0 | 4439 | 0.1253 | 0.6535 | 0.8678 | 0.9743 | nan | 0.9833 | 0.9309 | 0.9635 | 0.5933 | 0.0 | 0.9706 | 0.8007 | 0.9427 | 0.5535 |
| 0.0427 | 24.0 | 4632 | 0.1075 | 0.6568 | 0.8602 | 0.9771 | nan | 0.9882 | 0.9229 | 0.9543 | 0.5755 | 0.0 | 0.9739 | 0.8342 | 0.9379 | 0.5379 |
| 0.0417 | 25.0 | 4825 | 0.1250 | 0.6443 | 0.8559 | 0.9723 | nan | 0.9820 | 0.9337 | 0.9542 | 0.5539 | 0.0 | 0.9684 | 0.7904 | 0.9375 | 0.5250 |
| 0.0402 | 26.0 | 5018 | 0.1206 | 0.6518 | 0.8497 | 0.9775 | nan | 0.9892 | 0.9236 | 0.9536 | 0.5324 | 0.0 | 0.9744 | 0.8373 | 0.9383 | 0.5089 |
| 0.0403 | 27.0 | 5211 | 0.1164 | 0.6565 | 0.8688 | 0.9755 | nan | 0.9848 | 0.9382 | 0.9531 | 0.5991 | 0.0 | 0.9723 | 0.8183 | 0.9378 | 0.5540 |
| 0.0405 | 28.0 | 5404 | 0.1091 | 0.6586 | 0.8505 | 0.9799 | nan | 0.9926 | 0.9177 | 0.9530 | 0.5389 | 0.0 | 0.9773 | 0.8650 | 0.9381 | 0.5128 |
| 0.0384 | 29.0 | 5597 | 0.1304 | 0.6504 | 0.8470 | 0.9781 | nan | 0.9893 | 0.9365 | 0.9508 | 0.5112 | 0.0 | 0.9751 | 0.8477 | 0.9365 | 0.4926 |
| 0.0374 | 30.0 | 5790 | 0.1095 | 0.6585 | 0.8605 | 0.9783 | nan | 0.9891 | 0.9323 | 0.9507 | 0.5698 | 0.0 | 0.9754 | 0.8469 | 0.9358 | 0.5345 |
| 0.0378 | 31.0 | 5983 | 0.1245 | 0.6558 | 0.8553 | 0.9780 | nan | 0.9896 | 0.9237 | 0.9539 | 0.5540 | 0.0 | 0.9750 | 0.8435 | 0.9353 | 0.5254 |
| 0.0367 | 32.0 | 6176 | 0.1288 | 0.6504 | 0.8637 | 0.9737 | nan | 0.9828 | 0.9386 | 0.9555 | 0.5778 | 0.0 | 0.9700 | 0.8016 | 0.9362 | 0.5443 |
| 0.037 | 33.0 | 6369 | 0.1293 | 0.6565 | 0.8656 | 0.9760 | nan | 0.9862 | 0.9381 | 0.9443 | 0.5938 | 0.0 | 0.9726 | 0.8273 | 0.9314 | 0.5512 |
| 0.0363 | 34.0 | 6562 | 0.1242 | 0.6594 | 0.8528 | 0.9800 | nan | 0.9926 | 0.9171 | 0.9529 | 0.5485 | 0.0 | 0.9773 | 0.8632 | 0.9378 | 0.5188 |
| 0.0361 | 35.0 | 6755 | 0.1239 | 0.6653 | 0.8739 | 0.9781 | nan | 0.9886 | 0.9247 | 0.9557 | 0.6264 | 0.0 | 0.9752 | 0.8420 | 0.9374 | 0.5718 |
| 0.0371 | 36.0 | 6948 | 0.1220 | 0.6626 | 0.8691 | 0.9782 | nan | 0.9887 | 0.9297 | 0.9530 | 0.6049 | 0.0 | 0.9751 | 0.8418 | 0.9375 | 0.5585 |
| 0.034 | 37.0 | 7141 | 0.1694 | 0.6300 | 0.8685 | 0.9609 | nan | 0.9666 | 0.9453 | 0.9602 | 0.6020 | 0.0 | 0.9551 | 0.6981 | 0.9399 | 0.5567 |
| 0.0358 | 38.0 | 7334 | 0.1251 | 0.6513 | 0.8534 | 0.9764 | nan | 0.9878 | 0.9270 | 0.9492 | 0.5497 | 0.0 | 0.9731 | 0.8290 | 0.9345 | 0.5198 |
| 0.033 | 39.0 | 7527 | 0.1330 | 0.6542 | 0.8604 | 0.9764 | nan | 0.9868 | 0.9343 | 0.9503 | 0.5700 | 0.0 | 0.9731 | 0.8292 | 0.9351 | 0.5336 |
| 0.0327 | 40.0 | 7720 | 0.1359 | 0.6490 | 0.8537 | 0.9750 | nan | 0.9862 | 0.9269 | 0.9483 | 0.5535 | 0.0 | 0.9716 | 0.8183 | 0.9330 | 0.5221 |
| 0.0336 | 41.0 | 7913 | 0.1277 | 0.6588 | 0.8667 | 0.9766 | nan | 0.9874 | 0.9267 | 0.9489 | 0.6037 | 0.0 | 0.9734 | 0.8288 | 0.9341 | 0.5577 |
| 0.0312 | 42.0 | 8106 | 0.1321 | 0.6568 | 0.8716 | 0.9749 | nan | 0.9844 | 0.9358 | 0.9500 | 0.6163 | 0.0 | 0.9714 | 0.8132 | 0.9344 | 0.5650 |
| 0.0321 | 43.0 | 8299 | 0.1269 | 0.6533 | 0.8574 | 0.9763 | nan | 0.9874 | 0.9283 | 0.9490 | 0.5649 | 0.0 | 0.9730 | 0.8285 | 0.9335 | 0.5316 |
| 0.0306 | 44.0 | 8492 | 0.1269 | 0.6583 | 0.8528 | 0.9792 | nan | 0.9918 | 0.9207 | 0.9467 | 0.5520 | 0.0 | 0.9764 | 0.8593 | 0.9324 | 0.5236 |
| 0.0306 | 45.0 | 8685 | 0.1335 | 0.6503 | 0.8503 | 0.9765 | nan | 0.9883 | 0.9283 | 0.9439 | 0.5407 | 0.0 | 0.9733 | 0.8345 | 0.9295 | 0.5144 |
| 0.0324 | 46.0 | 8878 | 0.1294 | 0.6538 | 0.8490 | 0.9784 | nan | 0.9908 | 0.9254 | 0.9441 | 0.5358 | 0.0 | 0.9754 | 0.8525 | 0.9303 | 0.5107 |
| 0.0318 | 47.0 | 9071 | 0.1230 | 0.6564 | 0.8549 | 0.9782 | nan | 0.9900 | 0.9252 | 0.9486 | 0.5559 | 0.0 | 0.9752 | 0.8477 | 0.9335 | 0.5255 |
| 0.0319 | 48.0 | 9264 | 0.1267 | 0.6524 | 0.8501 | 0.9776 | nan | 0.9895 | 0.9278 | 0.9464 | 0.5368 | 0.0 | 0.9745 | 0.8438 | 0.9322 | 0.5117 |
| 0.0312 | 49.0 | 9457 | 0.1258 | 0.6568 | 0.8602 | 0.9774 | nan | 0.9884 | 0.9321 | 0.9482 | 0.5720 | 0.0 | 0.9743 | 0.8399 | 0.9327 | 0.5373 |
| 0.0311 | 50.0 | 9650 | 0.1203 | 0.6589 | 0.8610 | 0.9779 | nan | 0.9894 | 0.9262 | 0.9471 | 0.5814 | 0.0 | 0.9749 | 0.8444 | 0.9319 | 0.5435 |
| 0.0327 | 51.0 | 9843 | 0.1219 | 0.6575 | 0.8577 | 0.9780 | nan | 0.9897 | 0.9265 | 0.9457 | 0.5688 | 0.0 | 0.9750 | 0.8462 | 0.9314 | 0.5348 |
| 0.031 | 51.81 | 10000 | 0.1305 | 0.6564 | 0.8562 | 0.9780 | nan | 0.9896 | 0.9270 | 0.9478 | 0.5603 | 0.0 | 0.9749 | 0.8458 | 0.9324 | 0.5290 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
kishore2/zephyr-7B-alpha-tags-86-FT-TESTING | kishore2 | 2024-02-22T15:07:22Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-alpha-GPTQ",
"base_model:adapter:TheBloke/zephyr-7B-alpha-GPTQ",
"license:mit",
"region:us"
] | null | 2024-02-22T15:07:18Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/zephyr-7B-alpha-GPTQ
model-index:
- name: zephyr-7B-alpha-tags-86-FT-TESTING
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7B-alpha-tags-86-FT-TESTING
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 |
lvcalucioli/llamantino7b_2_multiple-choice | lvcalucioli | 2024-02-22T15:01:23Z | 1 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:swap-uniba/LLaMAntino-2-7b-hf-ITA",
"base_model:adapter:swap-uniba/LLaMAntino-2-7b-hf-ITA",
"license:llama2",
"region:us"
] | null | 2024-02-18T17:30:01Z | ---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: swap-uniba/LLaMAntino-2-7b-hf-ITA
model-index:
- name: llamantino7b_2_multiple-choice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llamantino7b_2_multiple-choice
This model is a fine-tuned version of [swap-uniba/LLaMAntino-2-7b-hf-ITA](https://huggingface.co/swap-uniba/LLaMAntino-2-7b-hf-ITA) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 12
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.2 |
eastjin/tinyllama-sft-ko-qlora_v2 | eastjin | 2024-02-22T14:58:46Z | 2 | 0 | peft | [
"peft",
"safetensors",
"llama",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:kyujinpy/KOR-OpenOrca-Platypus",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-02-22T08:56:52Z | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- kyujinpy/KOR-OpenOrca-Platypus
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model-index:
- name: tinyllama-sft-ko-qlora_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-sft-ko-qlora_v2
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the kyujinpy/KOR-OpenOrca-Platypus dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
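Until usage is documented, the QLoRA adapter can presumably be attached to its base model with PEFT; a minimal sketch using the base model named in this card:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "eastjin/tinyllama-sft-ko-qlora_v2")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```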
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9945 | 1.0 | 1288 | 1.9996 |
| 2.0215 | 2.0 | 2577 | 1.9851 |
| 1.9766 | 3.0 | 3864 | 1.9850 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 |
ibunescu/phi-2_GDPR_4 | ibunescu | 2024-02-22T14:58:11Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T14:55:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gizmo-ai/flan-t5-small | gizmo-ai | 2024-02-22T14:49:06Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dataset:aqua_rat",
"dataset:esnli",
"dataset:quasc",
"dataset:qed",
"arxiv:2210.11416",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-22T14:49:06Z | ---
language:
- en
- fr
- ro
- de
- multilingual
tags:
- text2text-generation
widget:
- text: "Translate to German: My name is Arthur"
example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
example_title: "Premise and hypothesis"
datasets:
- svakulenk0/qrecc
- taskmaster2
- djaym7/wiki_dialog
- deepmind/code_contests
- lambada
- gsm8k
- aqua_rat
- esnli
- quasc
- qed
license: apache-2.0
---
# Model Card for FLAN-T5 small
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/flan2_architecture.jpg"
alt="drawing" width="600"/>
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
# TL;DR
If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks, also covering more languages.
As mentioned in the first few lines of the abstract:
> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints,1 which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy-pasted from the [T5 model card](https://huggingface.co/t5-large).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
- **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2210.11416.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face FLAN-T5 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/t5)
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small", device_map="auto")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small", device_map="auto", torch_dtype=torch.float16)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small", device_map="auto", load_in_8bit=True)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
# Uses
## Direct Use and Downstream Use
The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:
> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models
See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
The information below in this section is copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf):
> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
## Ethical considerations and risks
> Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
## Known Limitations
> Flan-T5 has not been tested in real world applications.
## Sensitive Use:
> Flan-T5 should not be applied for any unacceptable use cases, e.g., generation of abusive speech.
# Training Details
## Training Data
The model was trained on a mixture of tasks, including those described in the table below (from the original paper, figure 2):

## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):
> These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size.
The model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on various tasks covering several languages (1836 in total). See the table below for some quantitative evaluation:

For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf).
## Results
For full results for FLAN-T5-Small, see the [research paper](https://arxiv.org/pdf/2210.11416.pdf), Table 3.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.11416,
doi = {10.48550/ARXIV.2210.11416},
url = {https://arxiv.org/abs/2210.11416},
author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Scaling Instruction-Finetuned Language Models},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` |
LarryAIDraw/convSD15Checkpoint_v021 | LarryAIDraw | 2024-02-22T14:48:33Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-20T05:00:07Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/311179/conv-sd15-checkpoint |
LarryAIDraw/yume_ba | LarryAIDraw | 2024-02-22T14:47:15Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-22T14:36:30Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/314786/yume-blue-archive-or-goofy-ai |
LarryAIDraw/yashajin_ai | LarryAIDraw | 2024-02-22T14:46:30Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-22T14:34:13Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/315671/yashajin-ai-the-ryuos-work-is-never-done |
LarryAIDraw/HighSchoolDxD_HimejimaAkeno | LarryAIDraw | 2024-02-22T14:45:13Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-22T14:32:13Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/315695/himejima-akeno-hero-ver-or-highschool-dxd |
LarryAIDraw/Char-HonkaiSR-RuanMei-V2 | LarryAIDraw | 2024-02-22T14:45:02Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-22T14:31:53Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/252328/ruan-mei-or-honkai-star-rail |
LarryAIDraw/Ogiso_Setsuna | LarryAIDraw | 2024-02-22T14:44:41Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-22T14:31:12Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/316566/ogiso-setsuna-white-album-2 |
LarryAIDraw/Touma_Kazusa | LarryAIDraw | 2024-02-22T14:44:32Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-22T14:30:46Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/316591/touma-kazusa-white-album-2 |
OmarHaroon01/t5-samsum | OmarHaroon01 | 2024-02-22T14:44:04Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-22T14:43:54Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-samsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7097
- Rouge1: 43.1274
- Rouge2: 19.364
- Rougel: 35.6435
- Rougelsum: 39.6113
- Gen Len: 16.8840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.01 | 1.0 | 1842 | 1.7905 | 40.9077 | 17.5516 | 33.9527 | 37.531 | 16.6960 |
| 1.8931 | 2.0 | 3684 | 1.7445 | 42.0004 | 18.4562 | 34.676 | 38.4273 | 16.8816 |
| 1.8391 | 3.0 | 5526 | 1.7248 | 42.6688 | 18.9855 | 35.2402 | 39.0387 | 16.7326 |
| 1.8104 | 4.0 | 7368 | 1.7121 | 42.9504 | 19.3162 | 35.6305 | 39.543 | 16.9829 |
| 1.7834 | 5.0 | 9210 | 1.7097 | 43.1274 | 19.364 | 35.6435 | 39.6113 | 16.8840 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
AumBarai/q-Taxi-v3 | AumBarai | 2024-02-22T14:43:08Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-22T14:43:05Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="AumBarai/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
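A minimal sketch of rolling out the greedy policy from the loaded Q-table. This assumes the pickled dict follows the Deep RL course convention and exposes `"qtable"` and `"env_id"` keys:

```python
import numpy as np
import gymnasium as gym

# Assumes `model` was loaded with load_from_hub as above.
env = gym.make(model["env_id"])
state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```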
|
wooseok0303/xlm-roberta-base-finetuned-panx-ko-fr | wooseok0303 | 2024-02-22T14:41:13Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-22T14:18:14Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-ko-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-ko-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1902
- F1: 0.8548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3613 | 1.0 | 715 | 0.2528 | 0.8021 |
| 0.1905 | 2.0 | 1430 | 0.1921 | 0.8420 |
| 0.1237 | 3.0 | 2145 | 0.1902 | 0.8548 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu118
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
yam-peleg/Experiment22-7B | yam-peleg | 2024-02-22T14:40:51Z | 46 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"chat",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T13:48:15Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- chat
---
**Experiment22-7B**
An experiment for testing and refining a specific training and evaluation pipeline research framework.
This experiment aims to identify potential optimizations, focusing on data engineering, architecture efficiency, and evaluation performance.
The goal is to evaluate the effectiveness of a new training / evaluation pipeline for LLMs.
The experiment will explore adjustments in data preprocessing, model training algorithms, and evaluation metrics to test methods for improvement.
More details in future experiments.
|
AumBarai/q-FrozenLake-v1-4x4-noSlippery | AumBarai | 2024-02-22T14:35:43Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-22T14:35:40Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="AumBarai/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
RonanMcGovern/deepseek-coder-1.3b-base-chat-function-calling-v3-adapters-local | RonanMcGovern | 2024-02-22T14:34:52Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T14:34:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Arczisan/feet-helper | Arczisan | 2024-02-22T14:32:21Z | 12 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] | text-to-image | 2024-02-22T14:32:00Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "<lora:Toe_Ring-DEF:0.7> 1girl, solo, sitting on the floor, legs together, indoors, barefoot, toe ring, jewelry"
output:
url: images/00603-4185133911.jpeg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# Feet Focus Helper
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Arczisan/feet-helper/tree/main) them in the Files & versions tab.
|
wooseok0303/xlm-roberta-base-finetuned-panx-de | wooseok0303 | 2024-02-22T14:30:58Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-12-25T13:41:04Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-ko
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-ko
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1698
- F1: 0.8562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.352 | 1.0 | 525 | 0.2044 | 0.8064 |
| 0.1817 | 2.0 | 1050 | 0.1782 | 0.8353 |
| 0.1183 | 3.0 | 1575 | 0.1698 | 0.8562 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu118
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
merve/gemma-7b-8bit | merve | 2024-02-22T14:28:54Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"gemma",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-22T13:47:08Z | ---
license: other
---
# Gemma-7B in 8-bit with bitsandbytes
This is the repository for Gemma-7B quantized to 8-bit using bitsandbytes.
Original model card and license for Gemma-7B can be found [here](https://huggingface.co/google/gemma-7b#gemma-model-card).
This is the base model and it's not instruction fine-tuned.
## Usage
Please visit original Gemma-7B [model card](https://huggingface.co/google/gemma-7b#usage-and-limitations) for intended uses and limitations.
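For reference, a checkpoint like this one can be produced by loading the base model in 8-bit with bitsandbytes and pushing the result. The sketch below shows the idea; the `push_to_hub` step is an assumption about how this repository was created:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b",
    quantization_config=quant_config,
    device_map="auto",
)
model.push_to_hub("merve/gemma-7b-8bit")  # assumption: pushed like this
```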
You can use this model as follows:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained(
"merve/gemma-7b-8bit",
device_map='auto'
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
``` |
gcicceri/organoids-prova_organoid | gcicceri | 2024-02-22T14:20:32Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-08T10:17:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: organoids-prova_organoid
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8576287657920311
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# organoids-prova_organoid
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3433
- Accuracy: 0.8576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2121 | 0.99 | 36 | 1.3066 | 0.4116 |
| 0.8905 | 1.99 | 72 | 0.9344 | 0.6749 |
| 0.6942 | 2.98 | 108 | 0.6875 | 0.7507 |
| 0.6087 | 4.0 | 145 | 0.5493 | 0.7896 |
| 0.5896 | 4.99 | 181 | 0.5028 | 0.7993 |
| 0.6168 | 5.99 | 217 | 0.4787 | 0.8100 |
| 0.5627 | 6.98 | 253 | 0.4373 | 0.8319 |
| 0.5654 | 8.0 | 290 | 0.4324 | 0.8299 |
| 0.5204 | 8.99 | 326 | 0.4130 | 0.8319 |
| 0.5581 | 9.99 | 362 | 0.4264 | 0.8241 |
| 0.5232 | 10.98 | 398 | 0.4074 | 0.8294 |
| 0.483 | 12.0 | 435 | 0.3850 | 0.8445 |
| 0.5208 | 12.99 | 471 | 0.3791 | 0.8489 |
| 0.4937 | 13.99 | 507 | 0.3723 | 0.8528 |
| 0.4436 | 14.98 | 543 | 0.3910 | 0.8440 |
| 0.5169 | 16.0 | 580 | 0.3794 | 0.8465 |
| 0.4394 | 16.99 | 616 | 0.3876 | 0.8440 |
| 0.4616 | 17.99 | 652 | 0.3844 | 0.8465 |
| 0.4983 | 18.98 | 688 | 0.3552 | 0.8591 |
| 0.5295 | 20.0 | 725 | 0.3561 | 0.8547 |
| 0.5121 | 20.99 | 761 | 0.3573 | 0.8537 |
| 0.4379 | 21.99 | 797 | 0.3593 | 0.8576 |
| 0.4653 | 22.98 | 833 | 0.3473 | 0.8601 |
| 0.486 | 24.0 | 870 | 0.3473 | 0.8610 |
| 0.4751 | 24.99 | 906 | 0.3638 | 0.8552 |
| 0.4462 | 25.99 | 942 | 0.3533 | 0.8542 |
| 0.4197 | 26.98 | 978 | 0.3464 | 0.8601 |
| 0.4966 | 28.0 | 1015 | 0.3451 | 0.8649 |
| 0.5004 | 28.99 | 1051 | 0.3634 | 0.8508 |
| 0.4156 | 29.99 | 1087 | 0.3723 | 0.8474 |
| 0.4508 | 30.98 | 1123 | 0.3342 | 0.8669 |
| 0.43 | 32.0 | 1160 | 0.3389 | 0.8639 |
| 0.5004 | 32.99 | 1196 | 0.3416 | 0.8615 |
| 0.4927 | 33.99 | 1232 | 0.3545 | 0.8533 |
| 0.4802 | 34.98 | 1268 | 0.3382 | 0.8610 |
| 0.4334 | 36.0 | 1305 | 0.3480 | 0.8542 |
| 0.4557 | 36.99 | 1341 | 0.3392 | 0.8601 |
| 0.4551 | 37.99 | 1377 | 0.3488 | 0.8542 |
| 0.4643 | 38.98 | 1413 | 0.3424 | 0.8586 |
| 0.513 | 39.72 | 1440 | 0.3433 | 0.8576 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.8.1+cu111
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bebo2/B | bebo2 | 2024-02-22T14:15:30Z | 0 | 0 | allennlp | [
"allennlp",
"ar",
"dataset:teknium/OpenHermes-2.5",
"license:apache-2.0",
"region:us"
] | null | 2024-02-22T14:13:49Z | ---
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
language:
- ar
metrics:
- accuracy
library_name: allennlp
--- |
Schnatz65/distilbert-base-uncased-finetuned-emotion | Schnatz65 | 2024-02-22T14:15:15Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-20T14:46:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9251904604606086
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2189
- Accuracy: 0.9255
- F1: 0.9252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8888 | 1.0 | 250 | 0.3284 | 0.9025 | 0.8999 |
| 0.2576 | 2.0 | 500 | 0.2189 | 0.9255 | 0.9252 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.10.3
|
stablediffusionapi/5-sd-v1-5-inpaintingsafet | stablediffusionapi | 2024-02-22T14:09:33Z | 26 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-22T14:07:39Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# 5-sd-v1-5-inpainting.safetensors API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "5-sd-v1-5-inpaintingsafet".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/5-sd-v1-5-inpaintingsafet)
Model link: [View model](https://modelslab.com/models/5-sd-v1-5-inpaintingsafet)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "5-sd-v1-5-inpaintingsafet",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
smyousaf1/my_awesome_food_model | smyousaf1 | 2024-02-22T14:06:50Z | 35 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-22T13:43:25Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6708
- Accuracy: 0.884
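A minimal inference sketch with the `image-classification` pipeline (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="smyousaf1/my_awesome_food_model")
print(classifier("my_food_photo.jpg")[:3])  # placeholder path; prints top-3 labels
```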
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7219 | 0.99 | 62 | 2.5741 | 0.822 |
| 1.8365 | 2.0 | 125 | 1.8189 | 0.881 |
| 1.6064 | 2.98 | 186 | 1.6708 | 0.884 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Korla/hsb-mistral | Korla | 2024-02-22T14:03:13Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T13:53:21Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: hsb-mistral-7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hsb-mistral-7b
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
## Model description
This is a fine-tune for the Upper Sorbian language.
## Intended uses & limitations
This model is an experiment and a plaything; it may generate inaccurate results.
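A minimal chat-style inference sketch (the repo id is this model's; the prompt is only an illustration):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Korla/hsb-mistral")
model = AutoModelForCausalLM.from_pretrained("Korla/hsb-mistral", device_map="auto")

messages = [{"role": "user", "content": "Translate to Upper Sorbian: Good morning!"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```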
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.25e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.0
- Datasets 2.17.1
- Tokenizers 0.15.2
|
d-495/falcon-7b-sharded-bf16-finetuned-html-code-generation | d-495 | 2024-02-22T13:59:15Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:adapter:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2024-02-22T13:24:29Z | ---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: ybelkada/falcon-7b-sharded-bf16
model-index:
- name: falcon-7b-sharded-bf16-finetuned-html-code-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-sharded-bf16-finetuned-html-code-generation
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2953
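Since this repository holds PEFT adapters, inference requires loading them on top of the sharded base model. A sketch (`trust_remote_code` is an assumption for older Falcon revisions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "ybelkada/falcon-7b-sharded-bf16", device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "d-495/falcon-7b-sharded-bf16-finetuned-html-code-generation")
tokenizer = AutoTokenizer.from_pretrained("ybelkada/falcon-7b-sharded-bf16")
```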
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 320
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0686 | 0.18 | 20 | 1.7169 |
| 1.7547 | 0.36 | 40 | 1.3877 |
| 1.6152 | 0.54 | 60 | 1.3192 |
| 1.7433 | 0.72 | 80 | 1.2951 |
| 1.3587 | 0.9 | 100 | 1.2543 |
| 1.3846 | 1.08 | 120 | 1.2234 |
| 1.3242 | 1.26 | 140 | 1.3724 |
| 1.2023 | 1.43 | 160 | 1.2041 |
| 1.118 | 1.61 | 180 | 1.2393 |
| 1.1737 | 1.79 | 200 | 1.1972 |
| 1.3113 | 1.97 | 220 | 1.2141 |
| 0.9142 | 2.15 | 240 | 1.2419 |
| 0.7853 | 2.33 | 260 | 1.2953 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 |
izaq09/ppo-LunarLander-v2 | izaq09 | 2024-02-22T13:59:11Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-22T13:58:52Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.87 +/- 17.73
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
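A hedged completion of the TODO above; the checkpoint filename is an assumption based on the usual push-to-hub naming convention:

```python
import gymnasium as gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumption: the checkpoint was pushed under the default filename.
checkpoint = load_from_hub("izaq09/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
action, _ = model.predict(obs, deterministic=True)
```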
|
AbstractPerspective/Phi-2_GDPR_Mix_SLERP | AbstractPerspective | 2024-02-22T13:58:37Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T13:56:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Anatg/food_classifier | Anatg | 2024-02-22T13:56:33Z | 63 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-21T21:14:59Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: Anatg/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Anatg/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3835
- Validation Loss: 0.3573
- Train Accuracy: 0.915
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7451 | 1.5890 | 0.853 | 0 |
| 1.1982 | 0.8135 | 0.888 | 1 |
| 0.7040 | 0.5112 | 0.908 | 2 |
| 0.4854 | 0.4451 | 0.895 | 3 |
| 0.3835 | 0.3573 | 0.915 | 4 |
### Framework versions
- Transformers 4.37.2
- TensorFlow 2.15.0
- Datasets 2.17.1
- Tokenizers 0.15.2
|
raimund/distilbert-base-uncased-finetuned-emotion | raimund | 2024-02-22T13:53:48Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-09-05T11:31:28Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9258833558586087
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2153
- Accuracy: 0.926
- F1: 0.9259
## Model description
More information needed
## Intended uses & limitations
More information needed
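
Pending details from the author, a minimal inference sketch consistent with the card's metadata; the pipeline resolves the emotion labels from the checkpoint's config, and the example sentence is illustrative.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="raimund/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))  # e.g. [{'label': 'joy', 'score': ...}]
```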
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
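
The list above maps directly onto `TrainingArguments`; a minimal sketch, with `output_dir` as an assumption (Adam betas, epsilon, and the linear scheduler are the stated values):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```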
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8115 | 1.0 | 250 | 0.3220 | 0.912 | 0.9114 |
| 0.2484 | 2.0 | 500 | 0.2153 | 0.926 | 0.9259 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
peldrak/segformer-b2-ade-512-512-finetuned-coastTrain | peldrak | 2024-02-22T13:50:01Z | 186 | 0 | transformers | [
"transformers",
"pytorch",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/segformer-b2-finetuned-ade-512-512",
"base_model:finetune:nvidia/segformer-b2-finetuned-ade-512-512",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-22T08:49:39Z | ---
license: other
base_model: nvidia/segformer-b2-finetuned-ade-512-512
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b2-ade-512-512-finetuned-coastTrain
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b2-ade-512-512-finetuned-coastTrain
This model is a fine-tuned version of [nvidia/segformer-b2-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b2-finetuned-ade-512-512) on the peldrak/coastTrain_512-512 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6325
- Mean Iou: 0.7077
- Mean Accuracy: 0.8137
- Overall Accuracy: 0.8816
- Accuracy Water: 0.9348
- Accuracy Whitewater: 0.8020
- Accuracy Sediment: 0.8775
- Accuracy Other Natural Terrain: 0.5017
- Accuracy Vegetation: 0.8953
- Accuracy Development: 0.8739
- Accuracy Unknown: 0.8105
- Iou Water: 0.8677
- Iou Whitewater: 0.6774
- Iou Sediment: 0.7684
- Iou Other Natural Terrain: 0.4116
- Iou Vegetation: 0.8094
- Iou Development: 0.6762
- Iou Unknown: 0.7429
## Model description
More information needed
## Intended uses & limitations
More information needed
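
Until the author fills this in, a minimal segmentation sketch consistent with the card's metadata. The input image is hypothetical, and the mapping of class ids to the seven categories reported in the metrics above (water, whitewater, sediment, other natural terrain, vegetation, development, unknown) is an assumption taken from the evaluation listing, not from the repo config.

```python
from PIL import Image
import torch
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

repo = "peldrak/segformer-b2-ade-512-512-finetuned-coastTrain"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("coast.jpg")  # hypothetical coastal photo
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]       # per-pixel class ids at reduced resolution
```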
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
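
As with the other cards here, the list above maps directly onto `TrainingArguments`; a minimal sketch, with `output_dir` as an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="segformer-b2-ade-512-512-finetuned-coastTrain",  # assumed
    learning_rate=6e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=25,
)
```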
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Water | Accuracy Whitewater | Accuracy Sediment | Accuracy Other Natural Terrain | Accuracy Vegetation | Accuracy Development | Accuracy Unknown | Iou Water | Iou Whitewater | Iou Sediment | Iou Other Natural Terrain | Iou Vegetation | Iou Development | Iou Unknown |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------:|:-------------------:|:-----------------:|:------------------------------:|:-------------------:|:--------------------:|:----------------:|:---------:|:--------------:|:------------:|:-------------------------:|:--------------:|:---------------:|:-----------:|
| 1.774 | 0.05 | 20 | 1.6698 | 0.2430 | 0.3629 | 0.5226 | 0.4223 | 0.1819 | 0.2460 | 0.0005 | 0.8964 | 0.3490 | 0.4442 | 0.3799 | 0.1235 | 0.1849 | 0.0005 | 0.4341 | 0.2778 | 0.3001 |
| 1.7174 | 0.11 | 40 | 1.4427 | 0.2786 | 0.3779 | 0.6107 | 0.7533 | 0.0232 | 0.3707 | 0.0 | 0.8382 | 0.3590 | 0.3006 | 0.5472 | 0.0228 | 0.2829 | 0.0 | 0.4830 | 0.3259 | 0.2888 |
| 1.3093 | 0.16 | 60 | 1.2604 | 0.2465 | 0.3397 | 0.6011 | 0.7105 | 0.0004 | 0.0875 | 0.0 | 0.9559 | 0.3212 | 0.3026 | 0.5685 | 0.0004 | 0.0802 | 0.0 | 0.4744 | 0.2994 | 0.3022 |
| 1.1732 | 0.22 | 80 | 1.1278 | 0.2491 | 0.3431 | 0.6274 | 0.8303 | 0.0008 | 0.0586 | 0.0 | 0.9291 | 0.3276 | 0.2555 | 0.6103 | 0.0008 | 0.0560 | 0.0 | 0.5141 | 0.3076 | 0.2547 |
| 1.2471 | 0.27 | 100 | 1.0584 | 0.2724 | 0.3736 | 0.6494 | 0.8989 | 0.0375 | 0.1763 | 0.0 | 0.9176 | 0.5844 | 0.0006 | 0.5888 | 0.0374 | 0.1514 | 0.0 | 0.6245 | 0.5043 | 0.0006 |
| 0.8947 | 0.32 | 120 | 0.9324 | 0.3357 | 0.4425 | 0.7042 | 0.9033 | 0.0096 | 0.2778 | 0.0 | 0.9146 | 0.7000 | 0.2923 | 0.6888 | 0.0096 | 0.2355 | 0.0 | 0.6223 | 0.5019 | 0.2916 |
| 1.1922 | 0.38 | 140 | 0.9745 | 0.3410 | 0.4493 | 0.6821 | 0.7822 | 0.0148 | 0.2773 | 0.0 | 0.9225 | 0.7338 | 0.4144 | 0.6567 | 0.0148 | 0.2420 | 0.0 | 0.5463 | 0.5184 | 0.4086 |
| 1.5695 | 0.43 | 160 | 0.8840 | 0.3772 | 0.5025 | 0.7286 | 0.8957 | 0.0288 | 0.5000 | 0.0 | 0.8538 | 0.8545 | 0.3848 | 0.7277 | 0.0288 | 0.4012 | 0.0 | 0.6443 | 0.4589 | 0.3798 |
| 1.095 | 0.49 | 180 | 0.8596 | 0.3740 | 0.4988 | 0.7191 | 0.8436 | 0.0453 | 0.3715 | 0.0 | 0.8723 | 0.8661 | 0.4927 | 0.7428 | 0.0452 | 0.3100 | 0.0 | 0.5999 | 0.4533 | 0.4665 |
| 1.0495 | 0.54 | 200 | 0.7698 | 0.4265 | 0.5374 | 0.7612 | 0.8938 | 0.0653 | 0.6521 | 0.0 | 0.9071 | 0.7978 | 0.4455 | 0.7438 | 0.0650 | 0.5082 | 0.0 | 0.6655 | 0.5652 | 0.4378 |
| 0.6566 | 0.59 | 220 | 0.7619 | 0.4341 | 0.5580 | 0.7664 | 0.8953 | 0.0462 | 0.7732 | 0.0 | 0.8552 | 0.8443 | 0.4920 | 0.7358 | 0.0458 | 0.5370 | 0.0 | 0.6896 | 0.5557 | 0.4749 |
| 1.177 | 0.65 | 240 | 0.7722 | 0.4191 | 0.5324 | 0.7538 | 0.9128 | 0.0950 | 0.7406 | 0.0 | 0.8794 | 0.7236 | 0.3755 | 0.7230 | 0.0945 | 0.4920 | 0.0 | 0.6798 | 0.5699 | 0.3745 |
| 0.7819 | 0.7 | 260 | 0.7664 | 0.4015 | 0.5032 | 0.7414 | 0.9350 | 0.0834 | 0.3838 | 0.0 | 0.8537 | 0.7514 | 0.5153 | 0.6864 | 0.0821 | 0.2828 | 0.0 | 0.6919 | 0.5665 | 0.5011 |
| 0.7828 | 0.76 | 280 | 0.7064 | 0.4645 | 0.5823 | 0.7789 | 0.8782 | 0.1966 | 0.8202 | 0.0 | 0.9030 | 0.7815 | 0.4968 | 0.7609 | 0.1911 | 0.5326 | 0.0 | 0.6956 | 0.5829 | 0.4883 |
| 0.6092 | 0.81 | 300 | 0.7097 | 0.5114 | 0.6268 | 0.7947 | 0.9039 | 0.5025 | 0.7176 | 0.0 | 0.8930 | 0.8421 | 0.5283 | 0.7870 | 0.4181 | 0.5706 | 0.0 | 0.6882 | 0.6010 | 0.5149 |
| 1.7916 | 0.86 | 320 | 0.6693 | 0.5099 | 0.6222 | 0.7982 | 0.9064 | 0.4097 | 0.8063 | 0.0 | 0.8941 | 0.8109 | 0.5281 | 0.7762 | 0.3720 | 0.5858 | 0.0 | 0.7113 | 0.6113 | 0.5130 |
| 0.7345 | 0.92 | 340 | 0.6743 | 0.5055 | 0.6143 | 0.7979 | 0.9100 | 0.3940 | 0.7794 | 0.0 | 0.9127 | 0.8028 | 0.5009 | 0.7780 | 0.3587 | 0.6001 | 0.0 | 0.7140 | 0.5964 | 0.4915 |
| 0.637 | 0.97 | 360 | 0.6245 | 0.5222 | 0.6306 | 0.8067 | 0.9141 | 0.4131 | 0.7853 | 0.0 | 0.8914 | 0.8234 | 0.5870 | 0.7888 | 0.3803 | 0.6094 | 0.0 | 0.7178 | 0.6178 | 0.5414 |
| 0.8406 | 1.03 | 380 | 0.6479 | 0.5206 | 0.6254 | 0.8043 | 0.9103 | 0.4757 | 0.7567 | 0.0 | 0.9263 | 0.7775 | 0.5312 | 0.7995 | 0.4132 | 0.6059 | 0.0 | 0.7047 | 0.6121 | 0.5091 |
| 0.9531 | 1.08 | 400 | 0.6132 | 0.5323 | 0.6444 | 0.8113 | 0.9146 | 0.5178 | 0.8316 | 0.0 | 0.9081 | 0.8024 | 0.5364 | 0.8080 | 0.4352 | 0.6308 | 0.0 | 0.7162 | 0.6211 | 0.5151 |
| 0.6297 | 1.14 | 420 | 0.6529 | 0.5191 | 0.6509 | 0.7944 | 0.9257 | 0.5420 | 0.8718 | 0.0 | 0.7996 | 0.8808 | 0.5366 | 0.7799 | 0.4613 | 0.5873 | 0.0 | 0.6893 | 0.6036 | 0.5119 |
| 0.4527 | 1.19 | 440 | 0.6055 | 0.5393 | 0.6617 | 0.8136 | 0.9146 | 0.5618 | 0.8581 | 0.0 | 0.8712 | 0.8413 | 0.5852 | 0.8082 | 0.4572 | 0.6102 | 0.0 | 0.7232 | 0.6275 | 0.5490 |
| 0.3314 | 1.24 | 460 | 0.6061 | 0.5426 | 0.6457 | 0.8187 | 0.9256 | 0.4963 | 0.7948 | 0.0 | 0.9067 | 0.7824 | 0.6142 | 0.8083 | 0.4497 | 0.6202 | 0.0 | 0.7307 | 0.6162 | 0.5734 |
| 0.5344 | 1.3 | 480 | 0.6365 | 0.5289 | 0.6559 | 0.8089 | 0.9062 | 0.5875 | 0.8884 | 0.0 | 0.8953 | 0.7999 | 0.5137 | 0.8089 | 0.4373 | 0.6093 | 0.0 | 0.7265 | 0.6329 | 0.4873 |
| 1.0977 | 1.35 | 500 | 0.6129 | 0.5325 | 0.6456 | 0.8098 | 0.9402 | 0.6061 | 0.7829 | 0.0 | 0.8902 | 0.7751 | 0.5248 | 0.7957 | 0.4603 | 0.6199 | 0.0 | 0.7236 | 0.6247 | 0.5035 |
| 0.5028 | 1.41 | 520 | 0.6530 | 0.5303 | 0.6531 | 0.8049 | 0.9023 | 0.6113 | 0.7989 | 0.0 | 0.8993 | 0.8574 | 0.5029 | 0.8103 | 0.4972 | 0.6195 | 0.0 | 0.7036 | 0.5900 | 0.4914 |
| 0.4093 | 1.46 | 540 | 0.6043 | 0.5327 | 0.6330 | 0.8096 | 0.9220 | 0.5348 | 0.7972 | 0.0 | 0.9261 | 0.7064 | 0.5442 | 0.8086 | 0.4678 | 0.6166 | 0.0 | 0.7056 | 0.6094 | 0.5207 |
| 0.4392 | 1.51 | 560 | 0.5532 | 0.5729 | 0.6868 | 0.8318 | 0.9135 | 0.6389 | 0.8304 | 0.0 | 0.8862 | 0.8367 | 0.7019 | 0.8082 | 0.5218 | 0.6223 | 0.0 | 0.7509 | 0.6559 | 0.6515 |
| 0.4156 | 1.57 | 580 | 0.5921 | 0.5594 | 0.6634 | 0.8223 | 0.9162 | 0.6216 | 0.8008 | 0.0 | 0.9240 | 0.8058 | 0.5757 | 0.8100 | 0.5296 | 0.6424 | 0.0 | 0.7249 | 0.6494 | 0.5593 |
| 0.5817 | 1.62 | 600 | 0.6145 | 0.5603 | 0.6917 | 0.8233 | 0.9267 | 0.7580 | 0.8471 | 0.0 | 0.8722 | 0.8801 | 0.5577 | 0.8163 | 0.5487 | 0.6300 | 0.0 | 0.7402 | 0.6417 | 0.5450 |
| 0.3551 | 1.68 | 620 | 0.5390 | 0.5859 | 0.7156 | 0.8361 | 0.9135 | 0.8256 | 0.8467 | 0.0 | 0.8715 | 0.8494 | 0.7025 | 0.8289 | 0.5863 | 0.6466 | 0.0 | 0.7419 | 0.6485 | 0.6492 |
| 0.453 | 1.73 | 640 | 0.5088 | 0.5999 | 0.7029 | 0.8466 | 0.9237 | 0.7069 | 0.8011 | 0.0 | 0.9102 | 0.8441 | 0.7345 | 0.8305 | 0.5881 | 0.6650 | 0.0 | 0.7576 | 0.6760 | 0.6822 |
| 1.2453 | 1.78 | 660 | 0.5524 | 0.5847 | 0.6951 | 0.8369 | 0.9104 | 0.6767 | 0.8362 | 0.0 | 0.9038 | 0.8516 | 0.6871 | 0.8318 | 0.5614 | 0.6554 | 0.0 | 0.7336 | 0.6489 | 0.6621 |
| 0.7462 | 1.84 | 680 | 0.5106 | 0.6009 | 0.7035 | 0.8458 | 0.9255 | 0.7246 | 0.8720 | 0.0 | 0.9117 | 0.7932 | 0.6975 | 0.8210 | 0.6033 | 0.6703 | 0.0 | 0.7621 | 0.6712 | 0.6782 |
| 0.5394 | 1.89 | 700 | 0.5511 | 0.6024 | 0.7181 | 0.8431 | 0.8944 | 0.7458 | 0.8676 | 0.0002 | 0.8938 | 0.8785 | 0.7462 | 0.8143 | 0.6177 | 0.6510 | 0.0002 | 0.7569 | 0.6770 | 0.7000 |
| 0.687 | 1.95 | 720 | 0.5729 | 0.5931 | 0.7128 | 0.8334 | 0.8541 | 0.7522 | 0.8466 | 0.0 | 0.8989 | 0.8057 | 0.8324 | 0.7866 | 0.6072 | 0.6188 | 0.0 | 0.7447 | 0.6640 | 0.7302 |
| 1.8817 | 2.0 | 740 | 0.5696 | 0.5922 | 0.7055 | 0.8409 | 0.9274 | 0.7567 | 0.7970 | 0.0004 | 0.8907 | 0.8681 | 0.6980 | 0.8139 | 0.5882 | 0.6528 | 0.0004 | 0.7629 | 0.6662 | 0.6612 |
| 0.3863 | 2.05 | 760 | 0.5317 | 0.5858 | 0.6944 | 0.8390 | 0.9085 | 0.7019 | 0.8568 | 0.0004 | 0.9237 | 0.7845 | 0.6850 | 0.8275 | 0.5633 | 0.6667 | 0.0004 | 0.7480 | 0.6380 | 0.6568 |
| 0.4857 | 2.11 | 780 | 0.5299 | 0.5867 | 0.7092 | 0.8366 | 0.8941 | 0.7602 | 0.8367 | 0.0113 | 0.9016 | 0.8441 | 0.7162 | 0.8195 | 0.5735 | 0.6947 | 0.0113 | 0.7573 | 0.6176 | 0.6331 |
| 0.4574 | 2.16 | 800 | 0.4897 | 0.6125 | 0.7202 | 0.8538 | 0.9369 | 0.7566 | 0.8603 | 0.0110 | 0.8903 | 0.8365 | 0.7495 | 0.8322 | 0.5830 | 0.7061 | 0.0110 | 0.7690 | 0.6780 | 0.7082 |
| 1.2893 | 2.22 | 820 | 0.4904 | 0.6083 | 0.7060 | 0.8501 | 0.9106 | 0.7215 | 0.7903 | 0.0066 | 0.9298 | 0.7943 | 0.7891 | 0.8376 | 0.5719 | 0.7046 | 0.0066 | 0.7425 | 0.6604 | 0.7342 |
| 0.3318 | 2.27 | 840 | 0.5034 | 0.6084 | 0.7443 | 0.8465 | 0.9073 | 0.8481 | 0.9087 | 0.0175 | 0.8324 | 0.8653 | 0.8304 | 0.8301 | 0.6099 | 0.6466 | 0.0175 | 0.7525 | 0.6664 | 0.7359 |
| 0.7274 | 2.32 | 860 | 0.5037 | 0.6095 | 0.7283 | 0.8495 | 0.9094 | 0.7908 | 0.8544 | 0.0246 | 0.8896 | 0.8579 | 0.7715 | 0.8239 | 0.6145 | 0.6409 | 0.0246 | 0.7778 | 0.6544 | 0.7302 |
| 0.2701 | 2.38 | 880 | 0.4549 | 0.6217 | 0.7399 | 0.8567 | 0.9149 | 0.8438 | 0.8606 | 0.0327 | 0.8966 | 0.8430 | 0.7875 | 0.8370 | 0.6066 | 0.6989 | 0.0327 | 0.7736 | 0.6696 | 0.7338 |
| 0.7689 | 2.43 | 900 | 0.4638 | 0.6372 | 0.7440 | 0.8630 | 0.9084 | 0.8053 | 0.8739 | 0.0723 | 0.9211 | 0.8214 | 0.8054 | 0.8440 | 0.6305 | 0.6970 | 0.0723 | 0.7779 | 0.6641 | 0.7749 |
| 0.9057 | 2.49 | 920 | 0.4861 | 0.6244 | 0.7361 | 0.8576 | 0.9113 | 0.7860 | 0.8618 | 0.0268 | 0.8999 | 0.8763 | 0.7906 | 0.8372 | 0.6200 | 0.6959 | 0.0268 | 0.7706 | 0.6673 | 0.7534 |
| 0.4402 | 2.54 | 940 | 0.4684 | 0.6285 | 0.7347 | 0.8587 | 0.9232 | 0.8211 | 0.8214 | 0.0369 | 0.8997 | 0.8182 | 0.8225 | 0.8471 | 0.6360 | 0.7181 | 0.0369 | 0.7598 | 0.6635 | 0.7384 |
| 0.5323 | 2.59 | 960 | 0.5211 | 0.6238 | 0.7290 | 0.8542 | 0.9292 | 0.7797 | 0.8183 | 0.0548 | 0.8885 | 0.8458 | 0.7864 | 0.8261 | 0.6093 | 0.6929 | 0.0548 | 0.7610 | 0.6825 | 0.7401 |
| 0.3023 | 2.65 | 980 | 0.5354 | 0.6030 | 0.7154 | 0.8478 | 0.9187 | 0.7808 | 0.8493 | 0.0235 | 0.9198 | 0.8300 | 0.6854 | 0.8381 | 0.5942 | 0.6901 | 0.0235 | 0.7623 | 0.6448 | 0.6679 |
| 0.4543 | 2.7 | 1000 | 0.5198 | 0.6117 | 0.7449 | 0.8484 | 0.9163 | 0.8640 | 0.8625 | 0.0540 | 0.8592 | 0.9093 | 0.7492 | 0.8439 | 0.6254 | 0.6749 | 0.0539 | 0.7631 | 0.6250 | 0.6962 |
| 0.3895 | 2.76 | 1020 | 0.5174 | 0.6274 | 0.7447 | 0.8562 | 0.9323 | 0.8145 | 0.8672 | 0.0671 | 0.8619 | 0.8897 | 0.7806 | 0.8467 | 0.6486 | 0.6881 | 0.0670 | 0.7666 | 0.6495 | 0.7251 |
| 0.2782 | 2.81 | 1040 | 0.4946 | 0.6339 | 0.7520 | 0.8574 | 0.9124 | 0.8106 | 0.8587 | 0.0948 | 0.8764 | 0.9153 | 0.7958 | 0.8325 | 0.6386 | 0.6997 | 0.0946 | 0.7773 | 0.6642 | 0.7307 |
| 0.4942 | 2.86 | 1060 | 0.5120 | 0.6234 | 0.7283 | 0.8505 | 0.9292 | 0.8251 | 0.8523 | 0.0801 | 0.9149 | 0.8401 | 0.6565 | 0.8105 | 0.6503 | 0.7118 | 0.0797 | 0.7799 | 0.6880 | 0.6436 |
| 0.4381 | 2.92 | 1080 | 0.4983 | 0.6212 | 0.7341 | 0.8497 | 0.9364 | 0.8388 | 0.8506 | 0.1094 | 0.8976 | 0.8516 | 0.6543 | 0.8150 | 0.6122 | 0.7030 | 0.1088 | 0.7781 | 0.6929 | 0.6383 |
| 0.3068 | 2.97 | 1100 | 0.4810 | 0.6381 | 0.7514 | 0.8591 | 0.9265 | 0.8078 | 0.9013 | 0.0818 | 0.8630 | 0.8842 | 0.7953 | 0.8313 | 0.6523 | 0.7002 | 0.0815 | 0.7721 | 0.6849 | 0.7446 |
| 0.359 | 3.03 | 1120 | 0.4442 | 0.6598 | 0.7630 | 0.8697 | 0.9335 | 0.8282 | 0.8824 | 0.1721 | 0.9068 | 0.8465 | 0.7713 | 0.8409 | 0.6478 | 0.7214 | 0.1701 | 0.7973 | 0.6896 | 0.7517 |
| 0.9712 | 3.08 | 1140 | 0.4595 | 0.6490 | 0.7589 | 0.8595 | 0.9024 | 0.7823 | 0.8386 | 0.1947 | 0.8985 | 0.8891 | 0.8066 | 0.8274 | 0.6393 | 0.6980 | 0.1919 | 0.7811 | 0.6527 | 0.7527 |
| 0.9749 | 3.14 | 1160 | 0.4557 | 0.6531 | 0.7585 | 0.8647 | 0.9180 | 0.8494 | 0.8345 | 0.1702 | 0.9225 | 0.8509 | 0.7639 | 0.8341 | 0.6295 | 0.7190 | 0.1675 | 0.7838 | 0.6935 | 0.7446 |
| 0.2994 | 3.19 | 1180 | 0.4756 | 0.6362 | 0.7491 | 0.8622 | 0.9186 | 0.8165 | 0.8692 | 0.0591 | 0.8875 | 0.8784 | 0.8145 | 0.8459 | 0.6472 | 0.6854 | 0.0587 | 0.7768 | 0.6755 | 0.7638 |
| 0.2181 | 3.24 | 1200 | 0.4904 | 0.6266 | 0.7280 | 0.8585 | 0.9324 | 0.7221 | 0.8585 | 0.0592 | 0.9003 | 0.8704 | 0.7530 | 0.8398 | 0.6149 | 0.6996 | 0.0563 | 0.7772 | 0.6744 | 0.7238 |
| 0.5907 | 3.3 | 1220 | 0.5001 | 0.6422 | 0.7463 | 0.8642 | 0.9217 | 0.7183 | 0.8707 | 0.1054 | 0.8825 | 0.8751 | 0.8503 | 0.8427 | 0.6143 | 0.7039 | 0.1013 | 0.7779 | 0.6731 | 0.7825 |
| 0.3174 | 3.35 | 1240 | 0.4629 | 0.6486 | 0.7537 | 0.8596 | 0.9221 | 0.8126 | 0.8176 | 0.2011 | 0.9011 | 0.8308 | 0.7909 | 0.8320 | 0.6382 | 0.6930 | 0.1973 | 0.7762 | 0.6748 | 0.7287 |
| 0.9913 | 3.41 | 1260 | 0.5059 | 0.6454 | 0.7423 | 0.8616 | 0.9192 | 0.7211 | 0.8417 | 0.1734 | 0.9158 | 0.8354 | 0.7898 | 0.8376 | 0.6134 | 0.6851 | 0.1673 | 0.7757 | 0.6847 | 0.7541 |
| 0.594 | 3.46 | 1280 | 0.4978 | 0.6499 | 0.7564 | 0.8641 | 0.9234 | 0.7176 | 0.8996 | 0.1993 | 0.8755 | 0.8285 | 0.8510 | 0.8385 | 0.5961 | 0.6907 | 0.1834 | 0.7839 | 0.6827 | 0.7742 |
| 0.3606 | 3.51 | 1300 | 0.4553 | 0.6745 | 0.7827 | 0.8730 | 0.9278 | 0.8154 | 0.8644 | 0.2862 | 0.8900 | 0.8579 | 0.8372 | 0.8536 | 0.6281 | 0.7173 | 0.2693 | 0.7924 | 0.6784 | 0.7821 |
| 0.2496 | 3.57 | 1320 | 0.4779 | 0.6537 | 0.7524 | 0.8650 | 0.9381 | 0.7943 | 0.8497 | 0.2063 | 0.9145 | 0.8173 | 0.7467 | 0.8372 | 0.6416 | 0.7134 | 0.2002 | 0.7955 | 0.6703 | 0.7176 |
| 0.4585 | 3.62 | 1340 | 0.4831 | 0.6602 | 0.7539 | 0.8677 | 0.9365 | 0.7896 | 0.8694 | 0.2350 | 0.9203 | 0.7235 | 0.8029 | 0.8464 | 0.6458 | 0.7212 | 0.2264 | 0.7868 | 0.6337 | 0.7611 |
| 0.3049 | 3.68 | 1360 | 0.4651 | 0.6745 | 0.7803 | 0.8699 | 0.9383 | 0.8015 | 0.8047 | 0.3400 | 0.8881 | 0.8867 | 0.8027 | 0.8571 | 0.6454 | 0.7250 | 0.3120 | 0.7901 | 0.6405 | 0.7512 |
| 0.8292 | 3.73 | 1380 | 0.4662 | 0.6804 | 0.7807 | 0.8739 | 0.9435 | 0.7807 | 0.8659 | 0.3394 | 0.8889 | 0.8257 | 0.8207 | 0.8536 | 0.6421 | 0.7443 | 0.3049 | 0.7961 | 0.6561 | 0.7661 |
| 0.3299 | 3.78 | 1400 | 0.4314 | 0.6869 | 0.7926 | 0.8747 | 0.9229 | 0.7418 | 0.8517 | 0.4578 | 0.9058 | 0.8336 | 0.8349 | 0.8604 | 0.6217 | 0.7513 | 0.3634 | 0.7984 | 0.6544 | 0.7589 |
| 0.2797 | 3.84 | 1420 | 0.4894 | 0.6547 | 0.7635 | 0.8597 | 0.9276 | 0.7365 | 0.8303 | 0.3512 | 0.9020 | 0.8735 | 0.7233 | 0.8455 | 0.6126 | 0.7413 | 0.2852 | 0.7865 | 0.6307 | 0.6810 |
| 0.4773 | 3.89 | 1440 | 0.4983 | 0.6631 | 0.7652 | 0.8687 | 0.9388 | 0.8061 | 0.8921 | 0.2478 | 0.9071 | 0.8083 | 0.7560 | 0.8528 | 0.6477 | 0.7304 | 0.2323 | 0.7924 | 0.6646 | 0.7211 |
| 1.1257 | 3.95 | 1460 | 0.4759 | 0.6409 | 0.7308 | 0.8658 | 0.9443 | 0.7182 | 0.8320 | 0.1004 | 0.9206 | 0.8346 | 0.7656 | 0.8452 | 0.6081 | 0.7408 | 0.0951 | 0.7865 | 0.6783 | 0.7324 |
| 0.4146 | 4.0 | 1480 | 0.4447 | 0.6743 | 0.7708 | 0.8730 | 0.9185 | 0.7105 | 0.8995 | 0.3146 | 0.9137 | 0.7890 | 0.8495 | 0.8499 | 0.6032 | 0.7433 | 0.2770 | 0.7908 | 0.6761 | 0.7800 |
| 0.247 | 4.05 | 1500 | 0.4617 | 0.6677 | 0.7749 | 0.8645 | 0.9282 | 0.7519 | 0.8744 | 0.3599 | 0.8824 | 0.8310 | 0.7966 | 0.8370 | 0.6169 | 0.7111 | 0.3129 | 0.7859 | 0.6736 | 0.7364 |
| 0.2868 | 4.11 | 1520 | 0.4976 | 0.6640 | 0.7671 | 0.8623 | 0.9411 | 0.7546 | 0.8310 | 0.3615 | 0.8860 | 0.8316 | 0.7639 | 0.8324 | 0.6275 | 0.7132 | 0.3215 | 0.7919 | 0.6548 | 0.7071 |
| 0.4645 | 4.16 | 1540 | 0.4701 | 0.6657 | 0.7674 | 0.8686 | 0.9253 | 0.7647 | 0.8576 | 0.2678 | 0.9047 | 0.8506 | 0.8010 | 0.8455 | 0.6336 | 0.7184 | 0.2481 | 0.7931 | 0.6830 | 0.7384 |
| 0.3623 | 4.22 | 1560 | 0.4887 | 0.6773 | 0.7871 | 0.8734 | 0.9318 | 0.8109 | 0.8683 | 0.3052 | 0.8794 | 0.8817 | 0.8324 | 0.8523 | 0.6352 | 0.7358 | 0.2812 | 0.7979 | 0.6737 | 0.7650 |
| 0.8344 | 4.27 | 1580 | 0.4911 | 0.6732 | 0.7857 | 0.8728 | 0.9205 | 0.8140 | 0.8770 | 0.2531 | 0.8766 | 0.9037 | 0.8552 | 0.8564 | 0.6502 | 0.7171 | 0.2384 | 0.7906 | 0.6723 | 0.7875 |
| 0.1409 | 4.32 | 1600 | 0.4735 | 0.6764 | 0.7763 | 0.8757 | 0.9316 | 0.8224 | 0.8617 | 0.2534 | 0.9103 | 0.8249 | 0.8299 | 0.8535 | 0.6488 | 0.7286 | 0.2430 | 0.8013 | 0.6948 | 0.7645 |
| 0.4398 | 4.38 | 1620 | 0.4830 | 0.6820 | 0.7941 | 0.8709 | 0.9065 | 0.7114 | 0.8315 | 0.5040 | 0.9072 | 0.8489 | 0.8494 | 0.8530 | 0.5943 | 0.7336 | 0.3566 | 0.7883 | 0.6695 | 0.7789 |
| 0.3379 | 4.43 | 1640 | 0.4664 | 0.6838 | 0.7842 | 0.8749 | 0.9217 | 0.6983 | 0.8737 | 0.4312 | 0.9116 | 0.8084 | 0.8444 | 0.8576 | 0.5941 | 0.7437 | 0.3459 | 0.7933 | 0.6760 | 0.7760 |
| 0.2342 | 4.49 | 1660 | 0.4703 | 0.6771 | 0.7869 | 0.8721 | 0.9355 | 0.7352 | 0.8944 | 0.4283 | 0.8918 | 0.8368 | 0.7864 | 0.8629 | 0.5971 | 0.7208 | 0.3525 | 0.7961 | 0.6644 | 0.7461 |
| 0.148 | 4.54 | 1680 | 0.5671 | 0.6553 | 0.7752 | 0.8594 | 0.9377 | 0.7633 | 0.8931 | 0.3852 | 0.8725 | 0.8710 | 0.7036 | 0.8569 | 0.6201 | 0.7142 | 0.3288 | 0.7899 | 0.6168 | 0.6603 |
| 0.202 | 4.59 | 1700 | 0.5108 | 0.6819 | 0.7964 | 0.8695 | 0.9365 | 0.7900 | 0.8888 | 0.4566 | 0.8726 | 0.8431 | 0.7876 | 0.8467 | 0.6424 | 0.7405 | 0.3440 | 0.7963 | 0.6810 | 0.7222 |
| 0.2107 | 4.65 | 1720 | 0.4934 | 0.6838 | 0.7900 | 0.8730 | 0.9253 | 0.7301 | 0.8546 | 0.4374 | 0.8957 | 0.8634 | 0.8239 | 0.8512 | 0.5960 | 0.7502 | 0.3568 | 0.7942 | 0.6805 | 0.7574 |
| 0.5085 | 4.7 | 1740 | 0.5234 | 0.6806 | 0.7843 | 0.8705 | 0.9371 | 0.7440 | 0.8956 | 0.3814 | 0.8739 | 0.8448 | 0.8133 | 0.8413 | 0.6311 | 0.7240 | 0.3318 | 0.7942 | 0.6990 | 0.7432 |
| 0.3162 | 4.76 | 1760 | 0.5976 | 0.6440 | 0.7734 | 0.8483 | 0.9272 | 0.7952 | 0.9161 | 0.3980 | 0.8581 | 0.8645 | 0.6544 | 0.8406 | 0.6433 | 0.7053 | 0.3131 | 0.7712 | 0.6232 | 0.6116 |
| 0.4468 | 4.81 | 1780 | 0.5528 | 0.6616 | 0.7922 | 0.8563 | 0.9287 | 0.7599 | 0.8737 | 0.5399 | 0.8717 | 0.9086 | 0.6627 | 0.8518 | 0.6196 | 0.7356 | 0.3876 | 0.7842 | 0.6112 | 0.6411 |
| 0.2914 | 4.86 | 1800 | 0.4448 | 0.6855 | 0.8009 | 0.8731 | 0.9382 | 0.7935 | 0.8643 | 0.4820 | 0.8858 | 0.8662 | 0.7763 | 0.8616 | 0.6218 | 0.7468 | 0.3700 | 0.7976 | 0.6694 | 0.7314 |
| 0.3376 | 4.92 | 1820 | 0.4391 | 0.6927 | 0.7955 | 0.8774 | 0.9252 | 0.7407 | 0.8542 | 0.4768 | 0.9156 | 0.8381 | 0.8182 | 0.8606 | 0.6084 | 0.7608 | 0.3708 | 0.7959 | 0.6706 | 0.7822 |
| 0.3751 | 4.97 | 1840 | 0.4395 | 0.7040 | 0.8084 | 0.8803 | 0.9272 | 0.7596 | 0.8446 | 0.4874 | 0.8938 | 0.9010 | 0.8449 | 0.8613 | 0.6312 | 0.7617 | 0.3956 | 0.7935 | 0.6819 | 0.8028 |
| 0.178 | 5.03 | 1860 | 0.4407 | 0.6937 | 0.8031 | 0.8773 | 0.9406 | 0.7685 | 0.8876 | 0.4798 | 0.8827 | 0.8525 | 0.8103 | 0.8658 | 0.6382 | 0.7346 | 0.3686 | 0.7968 | 0.6835 | 0.7681 |
| 0.3075 | 5.08 | 1880 | 0.4485 | 0.6878 | 0.7883 | 0.8770 | 0.9411 | 0.6814 | 0.8326 | 0.5002 | 0.9071 | 0.8429 | 0.8130 | 0.8644 | 0.5854 | 0.7454 | 0.3662 | 0.7967 | 0.6784 | 0.7783 |
| 0.3155 | 5.14 | 1900 | 0.4399 | 0.6962 | 0.8050 | 0.8761 | 0.9352 | 0.8200 | 0.8642 | 0.4813 | 0.8897 | 0.8217 | 0.8226 | 0.8543 | 0.6421 | 0.7495 | 0.3825 | 0.7953 | 0.6951 | 0.7549 |
| 0.2081 | 5.19 | 1920 | 0.4378 | 0.7054 | 0.8064 | 0.8827 | 0.9331 | 0.8320 | 0.8833 | 0.4178 | 0.9041 | 0.8469 | 0.8278 | 0.8627 | 0.6702 | 0.7516 | 0.3508 | 0.7990 | 0.7127 | 0.7910 |
| 0.4906 | 5.24 | 1940 | 0.4397 | 0.7017 | 0.8146 | 0.8772 | 0.9188 | 0.8495 | 0.8683 | 0.4437 | 0.8827 | 0.8972 | 0.8419 | 0.8578 | 0.6533 | 0.7541 | 0.3838 | 0.7843 | 0.7012 | 0.7773 |
| 0.1855 | 5.3 | 1960 | 0.4616 | 0.6914 | 0.8004 | 0.8745 | 0.9245 | 0.7732 | 0.8977 | 0.4276 | 0.8822 | 0.8784 | 0.8191 | 0.8531 | 0.6409 | 0.7521 | 0.3624 | 0.7931 | 0.6883 | 0.7500 |
| 0.3019 | 5.35 | 1980 | 0.4658 | 0.6902 | 0.7901 | 0.8740 | 0.9221 | 0.7511 | 0.8538 | 0.4708 | 0.9197 | 0.7969 | 0.8167 | 0.8518 | 0.6226 | 0.7643 | 0.3793 | 0.7932 | 0.6694 | 0.7510 |
| 0.2349 | 5.41 | 2000 | 0.4969 | 0.6752 | 0.7835 | 0.8682 | 0.9260 | 0.7555 | 0.8678 | 0.4031 | 0.8963 | 0.8608 | 0.7746 | 0.8603 | 0.6412 | 0.7587 | 0.3488 | 0.7958 | 0.6117 | 0.7098 |
| 0.6845 | 5.46 | 2020 | 0.4809 | 0.6869 | 0.8029 | 0.8713 | 0.9326 | 0.8102 | 0.8850 | 0.4438 | 0.8600 | 0.8531 | 0.8357 | 0.8624 | 0.6586 | 0.7590 | 0.3593 | 0.7824 | 0.6246 | 0.7619 |
| 0.1687 | 5.51 | 2040 | 0.4282 | 0.7010 | 0.8068 | 0.8790 | 0.9320 | 0.8177 | 0.8642 | 0.4669 | 0.9015 | 0.8591 | 0.8059 | 0.8592 | 0.6517 | 0.7646 | 0.3818 | 0.7980 | 0.6817 | 0.7702 |
| 0.3555 | 5.57 | 2060 | 0.4627 | 0.6923 | 0.7981 | 0.8755 | 0.9458 | 0.7103 | 0.8504 | 0.5134 | 0.8782 | 0.8827 | 0.8058 | 0.8554 | 0.6083 | 0.7531 | 0.3919 | 0.7967 | 0.6811 | 0.7593 |
| 0.3006 | 5.62 | 2080 | 0.4758 | 0.6888 | 0.8039 | 0.8736 | 0.9239 | 0.7116 | 0.8500 | 0.5832 | 0.8970 | 0.8409 | 0.8206 | 0.8595 | 0.5948 | 0.7377 | 0.3947 | 0.7952 | 0.6844 | 0.7553 |
| 0.1909 | 5.68 | 2100 | 0.5164 | 0.6708 | 0.7887 | 0.8634 | 0.9366 | 0.7535 | 0.8412 | 0.5040 | 0.8853 | 0.8875 | 0.7126 | 0.8649 | 0.6044 | 0.7471 | 0.3775 | 0.7730 | 0.6505 | 0.6781 |
| 0.1298 | 5.73 | 2120 | 0.4719 | 0.6907 | 0.8048 | 0.8754 | 0.9197 | 0.7386 | 0.8630 | 0.5636 | 0.9034 | 0.7945 | 0.8508 | 0.8587 | 0.5945 | 0.7715 | 0.3663 | 0.7952 | 0.6657 | 0.7833 |
| 0.4167 | 5.78 | 2140 | 0.4801 | 0.6937 | 0.8060 | 0.8764 | 0.9225 | 0.7367 | 0.8686 | 0.5372 | 0.8945 | 0.8355 | 0.8471 | 0.8622 | 0.6013 | 0.7672 | 0.3903 | 0.7943 | 0.6654 | 0.7749 |
| 0.1717 | 5.84 | 2160 | 0.5276 | 0.6703 | 0.7827 | 0.8624 | 0.9348 | 0.7363 | 0.8846 | 0.4353 | 0.8702 | 0.8796 | 0.7382 | 0.8528 | 0.6274 | 0.7190 | 0.3811 | 0.7778 | 0.6236 | 0.7102 |
| 0.2303 | 5.89 | 2180 | 0.4999 | 0.6868 | 0.7950 | 0.8726 | 0.9112 | 0.7915 | 0.8782 | 0.4238 | 0.9146 | 0.8532 | 0.7925 | 0.8551 | 0.6426 | 0.7221 | 0.3567 | 0.7900 | 0.6791 | 0.7622 |
| 0.3475 | 5.95 | 2200 | 0.4783 | 0.6961 | 0.8122 | 0.8730 | 0.9255 | 0.8479 | 0.8630 | 0.5174 | 0.8952 | 0.8707 | 0.7654 | 0.8586 | 0.6580 | 0.7578 | 0.3956 | 0.7845 | 0.6948 | 0.7237 |
| 0.3662 | 6.0 | 2220 | 0.5238 | 0.6894 | 0.7961 | 0.8685 | 0.9335 | 0.8246 | 0.8653 | 0.4771 | 0.8993 | 0.8467 | 0.7261 | 0.8543 | 0.6671 | 0.7621 | 0.3943 | 0.7742 | 0.6805 | 0.6932 |
| 0.318 | 6.05 | 2240 | 0.4639 | 0.7007 | 0.8073 | 0.8766 | 0.9228 | 0.8070 | 0.8638 | 0.4879 | 0.9013 | 0.8522 | 0.8162 | 0.8630 | 0.6605 | 0.7610 | 0.3921 | 0.7846 | 0.6852 | 0.7586 |
| 0.2406 | 6.11 | 2260 | 0.4532 | 0.7060 | 0.8107 | 0.8807 | 0.9294 | 0.8086 | 0.8812 | 0.4703 | 0.8904 | 0.8375 | 0.8575 | 0.8600 | 0.6627 | 0.7584 | 0.3876 | 0.7963 | 0.6931 | 0.7843 |
| 0.2521 | 6.16 | 2280 | 0.5303 | 0.6836 | 0.8013 | 0.8649 | 0.9314 | 0.7973 | 0.8723 | 0.5140 | 0.8745 | 0.8990 | 0.7206 | 0.8617 | 0.6634 | 0.7422 | 0.3974 | 0.7653 | 0.6672 | 0.6877 |
| 0.3682 | 6.22 | 2300 | 0.4872 | 0.6991 | 0.8096 | 0.8769 | 0.9385 | 0.8007 | 0.8556 | 0.5189 | 0.8812 | 0.8445 | 0.8276 | 0.8604 | 0.6454 | 0.7445 | 0.3880 | 0.7897 | 0.6958 | 0.7700 |
| 0.2375 | 6.27 | 2320 | 0.5122 | 0.7005 | 0.8031 | 0.8764 | 0.9361 | 0.7812 | 0.8391 | 0.5068 | 0.8956 | 0.8495 | 0.8133 | 0.8528 | 0.6484 | 0.7545 | 0.4079 | 0.7950 | 0.6965 | 0.7482 |
| 0.2016 | 6.32 | 2340 | 0.5179 | 0.6681 | 0.7813 | 0.8652 | 0.9287 | 0.8025 | 0.9013 | 0.3811 | 0.9038 | 0.8525 | 0.6991 | 0.8573 | 0.6532 | 0.7251 | 0.3362 | 0.7960 | 0.6402 | 0.6690 |
| 0.3004 | 6.38 | 2360 | 0.5304 | 0.6753 | 0.7917 | 0.8656 | 0.9263 | 0.8587 | 0.8450 | 0.4287 | 0.9053 | 0.8642 | 0.7137 | 0.8554 | 0.6586 | 0.7352 | 0.3698 | 0.7921 | 0.6325 | 0.6834 |
| 0.2964 | 6.43 | 2380 | 0.4926 | 0.7003 | 0.8086 | 0.8749 | 0.9319 | 0.8082 | 0.8491 | 0.5135 | 0.8856 | 0.8568 | 0.8154 | 0.8478 | 0.6560 | 0.7508 | 0.4095 | 0.7935 | 0.6932 | 0.7513 |
| 0.3581 | 6.49 | 2400 | 0.4978 | 0.7056 | 0.8147 | 0.8789 | 0.9219 | 0.8106 | 0.8758 | 0.4859 | 0.8868 | 0.8750 | 0.8467 | 0.8554 | 0.6623 | 0.7478 | 0.4092 | 0.7976 | 0.6903 | 0.7767 |
| 0.2314 | 6.54 | 2420 | 0.5135 | 0.7069 | 0.8124 | 0.8806 | 0.9216 | 0.8201 | 0.8698 | 0.4748 | 0.8957 | 0.8311 | 0.8739 | 0.8529 | 0.6531 | 0.7548 | 0.3895 | 0.7970 | 0.7054 | 0.7959 |
| 0.2579 | 6.59 | 2440 | 0.5198 | 0.7065 | 0.8177 | 0.8774 | 0.9268 | 0.8008 | 0.8579 | 0.5449 | 0.8749 | 0.8501 | 0.8685 | 0.8587 | 0.6586 | 0.7613 | 0.4115 | 0.7841 | 0.7070 | 0.7645 |
| 0.2868 | 6.65 | 2460 | 0.4945 | 0.7088 | 0.8165 | 0.8816 | 0.9211 | 0.7457 | 0.8820 | 0.5451 | 0.8926 | 0.8669 | 0.8625 | 0.8590 | 0.6359 | 0.7593 | 0.4065 | 0.7987 | 0.7092 | 0.7932 |
| 0.1662 | 6.7 | 2480 | 0.5108 | 0.6747 | 0.7882 | 0.8665 | 0.9180 | 0.7134 | 0.8785 | 0.4941 | 0.8982 | 0.8366 | 0.7787 | 0.8438 | 0.5949 | 0.7428 | 0.3568 | 0.7933 | 0.6736 | 0.7176 |
| 0.4887 | 6.76 | 2500 | 0.5402 | 0.6787 | 0.7846 | 0.8676 | 0.9325 | 0.7387 | 0.8802 | 0.4523 | 0.8937 | 0.8354 | 0.7598 | 0.8441 | 0.6263 | 0.7582 | 0.3668 | 0.7947 | 0.6628 | 0.6979 |
| 0.2958 | 6.81 | 2520 | 0.5463 | 0.6879 | 0.8021 | 0.8693 | 0.9166 | 0.7788 | 0.8735 | 0.4952 | 0.8866 | 0.8606 | 0.8033 | 0.8506 | 0.6427 | 0.7519 | 0.3907 | 0.7843 | 0.6571 | 0.7384 |
| 0.1797 | 6.86 | 2540 | 0.5422 | 0.6755 | 0.7921 | 0.8630 | 0.9257 | 0.8059 | 0.8971 | 0.4600 | 0.8825 | 0.8415 | 0.7323 | 0.8440 | 0.6593 | 0.7343 | 0.3841 | 0.7896 | 0.6389 | 0.6784 |
| 0.3753 | 6.92 | 2560 | 0.5063 | 0.6908 | 0.8078 | 0.8715 | 0.9315 | 0.8338 | 0.8600 | 0.5039 | 0.8815 | 0.8582 | 0.7853 | 0.8502 | 0.6637 | 0.7301 | 0.3682 | 0.7913 | 0.6859 | 0.7463 |
| 0.2522 | 6.97 | 2580 | 0.5076 | 0.7051 | 0.8176 | 0.8795 | 0.9245 | 0.8097 | 0.8640 | 0.5244 | 0.8831 | 0.8414 | 0.8761 | 0.8574 | 0.6570 | 0.7438 | 0.3786 | 0.7951 | 0.7094 | 0.7947 |
| 0.1963 | 7.03 | 2600 | 0.5412 | 0.6953 | 0.8033 | 0.8755 | 0.9257 | 0.8008 | 0.8806 | 0.4512 | 0.8876 | 0.8389 | 0.8382 | 0.8524 | 0.6578 | 0.7383 | 0.3740 | 0.7959 | 0.6813 | 0.7676 |
| 0.2533 | 7.08 | 2620 | 0.5306 | 0.6941 | 0.8020 | 0.8749 | 0.9250 | 0.8002 | 0.8596 | 0.4599 | 0.8929 | 0.8457 | 0.8310 | 0.8526 | 0.6567 | 0.7543 | 0.3649 | 0.7948 | 0.6773 | 0.7584 |
| 0.5541 | 7.14 | 2640 | 0.4998 | 0.6890 | 0.8018 | 0.8720 | 0.9232 | 0.7654 | 0.8556 | 0.5372 | 0.8919 | 0.7927 | 0.8463 | 0.8573 | 0.6352 | 0.7635 | 0.3538 | 0.7833 | 0.6576 | 0.7724 |
| 0.0754 | 7.19 | 2660 | 0.5106 | 0.6958 | 0.8093 | 0.8759 | 0.9283 | 0.8220 | 0.8579 | 0.4752 | 0.8801 | 0.8556 | 0.8459 | 0.8570 | 0.6494 | 0.7530 | 0.3634 | 0.7918 | 0.6893 | 0.7669 |
| 0.2814 | 7.24 | 2680 | 0.4975 | 0.6984 | 0.8098 | 0.8768 | 0.9245 | 0.8033 | 0.8694 | 0.4664 | 0.8791 | 0.8741 | 0.8522 | 0.8568 | 0.6600 | 0.7555 | 0.3510 | 0.7891 | 0.6977 | 0.7788 |
| 0.2554 | 7.3 | 2700 | 0.5054 | 0.7003 | 0.8036 | 0.8779 | 0.9225 | 0.8051 | 0.8610 | 0.4625 | 0.9101 | 0.8438 | 0.8204 | 0.8561 | 0.6545 | 0.7509 | 0.3953 | 0.7986 | 0.6869 | 0.7602 |
| 0.3146 | 7.35 | 2720 | 0.5127 | 0.6946 | 0.7991 | 0.8764 | 0.9263 | 0.7836 | 0.8618 | 0.4574 | 0.9023 | 0.8321 | 0.8299 | 0.8564 | 0.6548 | 0.7525 | 0.3664 | 0.7994 | 0.6725 | 0.7600 |
| 0.2087 | 7.41 | 2740 | 0.5040 | 0.6965 | 0.8015 | 0.8755 | 0.9326 | 0.7905 | 0.8362 | 0.4804 | 0.8916 | 0.8476 | 0.8315 | 0.8541 | 0.6544 | 0.7533 | 0.3868 | 0.7943 | 0.6721 | 0.7607 |
| 0.1496 | 7.46 | 2760 | 0.5244 | 0.6893 | 0.7867 | 0.8743 | 0.9272 | 0.7775 | 0.8650 | 0.4209 | 0.9172 | 0.7854 | 0.8139 | 0.8556 | 0.6555 | 0.7573 | 0.3754 | 0.7983 | 0.6397 | 0.7434 |
| 0.3823 | 7.51 | 2780 | 0.5747 | 0.6768 | 0.7897 | 0.8643 | 0.9148 | 0.7911 | 0.8751 | 0.4695 | 0.9054 | 0.8193 | 0.7525 | 0.8540 | 0.6638 | 0.7402 | 0.3878 | 0.7889 | 0.6162 | 0.6866 |
| 0.2446 | 7.57 | 2800 | 0.6275 | 0.6678 | 0.7875 | 0.8619 | 0.9218 | 0.7070 | 0.8815 | 0.5210 | 0.8876 | 0.8579 | 0.7355 | 0.8564 | 0.6162 | 0.7259 | 0.3794 | 0.7888 | 0.6316 | 0.6763 |
| 0.2835 | 7.62 | 2820 | 0.5051 | 0.6959 | 0.8041 | 0.8762 | 0.9263 | 0.8025 | 0.8453 | 0.5165 | 0.9079 | 0.7984 | 0.8319 | 0.8567 | 0.6381 | 0.7513 | 0.3942 | 0.7988 | 0.6717 | 0.7606 |
| 0.2461 | 7.68 | 2840 | 0.4727 | 0.7052 | 0.8061 | 0.8823 | 0.9331 | 0.7872 | 0.8752 | 0.4974 | 0.9138 | 0.8080 | 0.8281 | 0.8625 | 0.6577 | 0.7555 | 0.3805 | 0.8062 | 0.6868 | 0.7876 |
| 1.5004 | 7.73 | 2860 | 0.4791 | 0.7056 | 0.8100 | 0.8830 | 0.9410 | 0.8027 | 0.8831 | 0.4808 | 0.8950 | 0.8375 | 0.8298 | 0.8652 | 0.6591 | 0.7512 | 0.3775 | 0.8080 | 0.6986 | 0.7794 |
| 0.1986 | 7.78 | 2880 | 0.5078 | 0.6965 | 0.7987 | 0.8790 | 0.9429 | 0.7843 | 0.8732 | 0.4093 | 0.8797 | 0.8623 | 0.8391 | 0.8571 | 0.6569 | 0.7405 | 0.3438 | 0.7997 | 0.6984 | 0.7788 |
| 0.2339 | 7.84 | 2900 | 0.5000 | 0.7044 | 0.8077 | 0.8810 | 0.9319 | 0.7377 | 0.8550 | 0.5302 | 0.9025 | 0.8697 | 0.8269 | 0.8614 | 0.6325 | 0.7542 | 0.4047 | 0.8033 | 0.6984 | 0.7766 |
| 0.0837 | 7.89 | 2920 | 0.5007 | 0.6982 | 0.8144 | 0.8763 | 0.9253 | 0.7582 | 0.8479 | 0.6030 | 0.8950 | 0.8469 | 0.8248 | 0.8550 | 0.6169 | 0.7484 | 0.4139 | 0.8027 | 0.6928 | 0.7577 |
| 1.3492 | 7.95 | 2940 | 0.5037 | 0.7013 | 0.8103 | 0.8780 | 0.9140 | 0.7773 | 0.8419 | 0.5456 | 0.9124 | 0.8252 | 0.8556 | 0.8571 | 0.6288 | 0.7564 | 0.4051 | 0.7975 | 0.6892 | 0.7750 |
| 0.1609 | 8.0 | 2960 | 0.5388 | 0.6951 | 0.8070 | 0.8771 | 0.9309 | 0.7180 | 0.8600 | 0.5458 | 0.8846 | 0.8751 | 0.8346 | 0.8616 | 0.6048 | 0.7611 | 0.3913 | 0.7982 | 0.6846 | 0.7640 |
| 0.1974 | 8.05 | 2980 | 0.5616 | 0.6876 | 0.7942 | 0.8735 | 0.9369 | 0.7196 | 0.8774 | 0.4704 | 0.8778 | 0.8559 | 0.8217 | 0.8551 | 0.6205 | 0.7453 | 0.3603 | 0.7912 | 0.6828 | 0.7575 |
| 0.3758 | 8.11 | 3000 | 0.5371 | 0.6905 | 0.7959 | 0.8751 | 0.9300 | 0.7476 | 0.8519 | 0.4906 | 0.9060 | 0.8422 | 0.8033 | 0.8609 | 0.6354 | 0.7528 | 0.3878 | 0.8035 | 0.6576 | 0.7357 |
| 0.3258 | 8.16 | 3020 | 0.5377 | 0.7041 | 0.8142 | 0.8800 | 0.9247 | 0.8074 | 0.8684 | 0.4980 | 0.8955 | 0.8712 | 0.8341 | 0.8620 | 0.6547 | 0.7613 | 0.4021 | 0.8047 | 0.6799 | 0.7639 |
| 0.6562 | 8.22 | 3040 | 0.5174 | 0.6983 | 0.8014 | 0.8770 | 0.9314 | 0.7726 | 0.8735 | 0.5109 | 0.9061 | 0.7972 | 0.8180 | 0.8600 | 0.6557 | 0.7687 | 0.3960 | 0.7985 | 0.6545 | 0.7546 |
| 0.1986 | 8.27 | 3060 | 0.4938 | 0.6974 | 0.8105 | 0.8764 | 0.9298 | 0.8153 | 0.8806 | 0.5178 | 0.8897 | 0.8100 | 0.8299 | 0.8614 | 0.6635 | 0.7664 | 0.3697 | 0.7956 | 0.6648 | 0.7607 |
| 0.2218 | 8.32 | 3080 | 0.4929 | 0.6988 | 0.8076 | 0.8781 | 0.9372 | 0.8038 | 0.8684 | 0.4851 | 0.8902 | 0.8650 | 0.8035 | 0.8626 | 0.6487 | 0.7563 | 0.4011 | 0.8030 | 0.6678 | 0.7521 |
| 0.181 | 8.38 | 3100 | 0.4854 | 0.7063 | 0.8118 | 0.8812 | 0.9257 | 0.7996 | 0.8745 | 0.5019 | 0.8993 | 0.8189 | 0.8630 | 0.8637 | 0.6552 | 0.7577 | 0.4015 | 0.7980 | 0.6787 | 0.7892 |
| 0.204 | 8.43 | 3120 | 0.4932 | 0.7113 | 0.8182 | 0.8835 | 0.9217 | 0.7920 | 0.8898 | 0.5004 | 0.8925 | 0.8545 | 0.8763 | 0.8663 | 0.6615 | 0.7517 | 0.3951 | 0.7976 | 0.7067 | 0.8001 |
| 0.5453 | 8.49 | 3140 | 0.4829 | 0.7143 | 0.8229 | 0.8869 | 0.9246 | 0.8194 | 0.8835 | 0.5282 | 0.9107 | 0.8404 | 0.8534 | 0.8713 | 0.6598 | 0.7619 | 0.3889 | 0.8103 | 0.6973 | 0.8106 |
| 0.2292 | 8.54 | 3160 | 0.5237 | 0.7038 | 0.8241 | 0.8793 | 0.9210 | 0.8151 | 0.8620 | 0.5617 | 0.8857 | 0.8758 | 0.8471 | 0.8655 | 0.6544 | 0.7576 | 0.3918 | 0.8016 | 0.6845 | 0.7710 |
| 0.1385 | 8.59 | 3180 | 0.5481 | 0.6862 | 0.8014 | 0.8701 | 0.9265 | 0.7939 | 0.8747 | 0.5152 | 0.8964 | 0.8391 | 0.7637 | 0.8588 | 0.6542 | 0.7545 | 0.3946 | 0.7982 | 0.6452 | 0.6975 |
| 0.1635 | 8.65 | 3200 | 0.5343 | 0.6881 | 0.7996 | 0.8714 | 0.9318 | 0.8101 | 0.8753 | 0.4698 | 0.8906 | 0.8485 | 0.7714 | 0.8524 | 0.6505 | 0.7575 | 0.3919 | 0.7995 | 0.6584 | 0.7066 |
| 0.4739 | 8.7 | 3220 | 0.5714 | 0.6824 | 0.7937 | 0.8679 | 0.9211 | 0.7951 | 0.8765 | 0.4737 | 0.9050 | 0.8307 | 0.7536 | 0.8483 | 0.6485 | 0.7458 | 0.3918 | 0.7968 | 0.6535 | 0.6920 |
| 0.0793 | 8.76 | 3240 | 0.5943 | 0.6928 | 0.8056 | 0.8730 | 0.9282 | 0.8230 | 0.8639 | 0.4809 | 0.8898 | 0.8667 | 0.7871 | 0.8574 | 0.6578 | 0.7553 | 0.4043 | 0.7971 | 0.6536 | 0.7243 |
| 0.457 | 8.81 | 3260 | 0.5463 | 0.6962 | 0.8103 | 0.8747 | 0.9296 | 0.8083 | 0.8663 | 0.5222 | 0.8885 | 0.8611 | 0.7962 | 0.8615 | 0.6589 | 0.7576 | 0.4144 | 0.8003 | 0.6528 | 0.7283 |
| 0.1746 | 8.86 | 3280 | 0.5066 | 0.7015 | 0.8137 | 0.8776 | 0.9288 | 0.8112 | 0.8883 | 0.5120 | 0.8900 | 0.8623 | 0.8032 | 0.8597 | 0.6556 | 0.7580 | 0.4177 | 0.8038 | 0.6769 | 0.7386 |
| 0.1672 | 8.92 | 3300 | 0.4937 | 0.7096 | 0.8195 | 0.8820 | 0.9303 | 0.8124 | 0.8993 | 0.5266 | 0.8841 | 0.8154 | 0.8682 | 0.8624 | 0.6627 | 0.7619 | 0.3900 | 0.7968 | 0.6986 | 0.7950 |
| 0.2551 | 8.97 | 3320 | 0.4960 | 0.7167 | 0.8159 | 0.8871 | 0.9345 | 0.7941 | 0.8738 | 0.4865 | 0.8973 | 0.8495 | 0.8758 | 0.8699 | 0.6624 | 0.7631 | 0.4167 | 0.8062 | 0.7013 | 0.7974 |
| 0.2257 | 9.03 | 3340 | 0.4759 | 0.7055 | 0.8053 | 0.8823 | 0.9389 | 0.7972 | 0.8856 | 0.4653 | 0.9050 | 0.8305 | 0.8146 | 0.8699 | 0.6631 | 0.7645 | 0.4127 | 0.8071 | 0.6584 | 0.7628 |
| 0.1426 | 9.08 | 3360 | 0.5225 | 0.7002 | 0.8077 | 0.8786 | 0.9387 | 0.8276 | 0.8668 | 0.4760 | 0.9021 | 0.8812 | 0.7617 | 0.8689 | 0.6713 | 0.7618 | 0.4178 | 0.8108 | 0.6443 | 0.7265 |
| 0.3053 | 9.14 | 3380 | 0.5660 | 0.6966 | 0.8077 | 0.8762 | 0.9376 | 0.8388 | 0.8694 | 0.5149 | 0.9038 | 0.8209 | 0.7682 | 0.8671 | 0.6636 | 0.7515 | 0.4205 | 0.8039 | 0.6474 | 0.7219 |
| 0.1548 | 9.19 | 3400 | 0.5813 | 0.6893 | 0.8127 | 0.8702 | 0.9234 | 0.8438 | 0.8660 | 0.5554 | 0.8943 | 0.8522 | 0.7538 | 0.8627 | 0.6629 | 0.7515 | 0.4097 | 0.7951 | 0.6362 | 0.7066 |
| 0.1211 | 9.24 | 3420 | 0.5769 | 0.6932 | 0.8118 | 0.8727 | 0.9283 | 0.8104 | 0.8924 | 0.5413 | 0.8906 | 0.8706 | 0.7492 | 0.8634 | 0.6655 | 0.7570 | 0.4123 | 0.7995 | 0.6412 | 0.7131 |
| 0.3147 | 9.3 | 3440 | 0.5796 | 0.6837 | 0.7944 | 0.8694 | 0.9430 | 0.8004 | 0.8910 | 0.4942 | 0.8914 | 0.7999 | 0.7407 | 0.8596 | 0.6598 | 0.7527 | 0.3965 | 0.7944 | 0.6143 | 0.7083 |
| 0.1144 | 9.35 | 3460 | 0.5141 | 0.7011 | 0.8072 | 0.8787 | 0.9328 | 0.8200 | 0.8415 | 0.5128 | 0.9064 | 0.8054 | 0.8311 | 0.8622 | 0.6524 | 0.7557 | 0.4135 | 0.8022 | 0.6547 | 0.7668 |
| 0.1403 | 9.41 | 3480 | 0.5173 | 0.6984 | 0.8031 | 0.8770 | 0.9355 | 0.7785 | 0.8427 | 0.5184 | 0.8989 | 0.8227 | 0.8251 | 0.8596 | 0.6557 | 0.7544 | 0.3990 | 0.7994 | 0.6599 | 0.7611 |
| 0.203 | 9.46 | 3500 | 0.5448 | 0.6956 | 0.8058 | 0.8747 | 0.9276 | 0.8036 | 0.8666 | 0.5026 | 0.8981 | 0.8497 | 0.7924 | 0.8585 | 0.6576 | 0.7538 | 0.4119 | 0.8012 | 0.6598 | 0.7268 |
| 0.1282 | 9.51 | 3520 | 0.5381 | 0.6883 | 0.7928 | 0.8737 | 0.9430 | 0.7925 | 0.8682 | 0.4397 | 0.8971 | 0.8504 | 0.7585 | 0.8612 | 0.6569 | 0.7461 | 0.3839 | 0.8005 | 0.6522 | 0.7175 |
| 0.432 | 9.57 | 3540 | 0.5318 | 0.6937 | 0.8023 | 0.8754 | 0.9268 | 0.8147 | 0.8637 | 0.4540 | 0.8954 | 0.8362 | 0.8253 | 0.8571 | 0.6500 | 0.7432 | 0.3879 | 0.7980 | 0.6640 | 0.7557 |
| 0.2516 | 9.62 | 3560 | 0.5506 | 0.6840 | 0.7978 | 0.8681 | 0.9232 | 0.8109 | 0.8477 | 0.5013 | 0.8985 | 0.8223 | 0.7809 | 0.8549 | 0.6578 | 0.7348 | 0.3866 | 0.7868 | 0.6453 | 0.7217 |
| 0.4764 | 9.68 | 3580 | 0.5261 | 0.7026 | 0.8114 | 0.8799 | 0.9304 | 0.8107 | 0.8580 | 0.5347 | 0.9048 | 0.7993 | 0.8418 | 0.8653 | 0.6570 | 0.7500 | 0.3826 | 0.7991 | 0.6800 | 0.7842 |
| 0.2307 | 9.73 | 3600 | 0.5296 | 0.7008 | 0.8118 | 0.8791 | 0.9377 | 0.8005 | 0.8662 | 0.5038 | 0.8850 | 0.8763 | 0.8130 | 0.8678 | 0.6570 | 0.7503 | 0.4018 | 0.8025 | 0.6632 | 0.7633 |
| 0.1737 | 9.78 | 3620 | 0.5454 | 0.7018 | 0.8109 | 0.8783 | 0.9224 | 0.7941 | 0.8902 | 0.4868 | 0.8924 | 0.8574 | 0.8328 | 0.8626 | 0.6616 | 0.7485 | 0.4094 | 0.8002 | 0.6707 | 0.7595 |
| 0.1987 | 9.84 | 3640 | 0.5651 | 0.6953 | 0.7997 | 0.8760 | 0.9312 | 0.7764 | 0.8770 | 0.4707 | 0.9034 | 0.8563 | 0.7825 | 0.8672 | 0.6606 | 0.7450 | 0.4049 | 0.7976 | 0.6536 | 0.7379 |
| 0.2626 | 9.89 | 3660 | 0.5389 | 0.7021 | 0.8095 | 0.8786 | 0.9281 | 0.8088 | 0.8662 | 0.4890 | 0.8949 | 0.8447 | 0.8350 | 0.8625 | 0.6628 | 0.7507 | 0.4134 | 0.8023 | 0.6652 | 0.7577 |
| 0.228 | 9.95 | 3680 | 0.4990 | 0.7072 | 0.8153 | 0.8809 | 0.9265 | 0.8167 | 0.8577 | 0.5368 | 0.9022 | 0.8071 | 0.8602 | 0.8640 | 0.6599 | 0.7581 | 0.4045 | 0.7984 | 0.6771 | 0.7882 |
| 0.202 | 10.0 | 3700 | 0.5157 | 0.7034 | 0.8153 | 0.8797 | 0.9390 | 0.8201 | 0.8618 | 0.5079 | 0.8794 | 0.8709 | 0.8278 | 0.8647 | 0.6624 | 0.7616 | 0.3828 | 0.7975 | 0.6802 | 0.7747 |
| 0.1267 | 10.05 | 3720 | 0.6238 | 0.6908 | 0.7911 | 0.8720 | 0.9357 | 0.7617 | 0.8816 | 0.4644 | 0.9074 | 0.8577 | 0.7292 | 0.8644 | 0.6666 | 0.7559 | 0.3939 | 0.7830 | 0.6742 | 0.6974 |
| 0.201 | 10.11 | 3740 | 0.5198 | 0.7118 | 0.8110 | 0.8851 | 0.9465 | 0.7879 | 0.8955 | 0.4572 | 0.8783 | 0.8490 | 0.8626 | 0.8647 | 0.6697 | 0.7411 | 0.3811 | 0.8006 | 0.7150 | 0.8104 |
| 0.1652 | 10.16 | 3760 | 0.4975 | 0.7185 | 0.8179 | 0.8882 | 0.9393 | 0.8016 | 0.8422 | 0.5026 | 0.9001 | 0.8738 | 0.8657 | 0.8708 | 0.6646 | 0.7568 | 0.4102 | 0.8077 | 0.7127 | 0.8068 |
| 0.2556 | 10.22 | 3780 | 0.5185 | 0.7079 | 0.8096 | 0.8823 | 0.9310 | 0.7939 | 0.8767 | 0.4982 | 0.9101 | 0.8376 | 0.8200 | 0.8702 | 0.6698 | 0.7605 | 0.4202 | 0.8064 | 0.6650 | 0.7630 |
| 0.1816 | 10.27 | 3800 | 0.5099 | 0.7049 | 0.8087 | 0.8808 | 0.9357 | 0.8126 | 0.8794 | 0.5079 | 0.9117 | 0.8190 | 0.7943 | 0.8725 | 0.6798 | 0.7567 | 0.4192 | 0.8061 | 0.6405 | 0.7595 |
| 0.1798 | 10.32 | 3820 | 0.5190 | 0.6953 | 0.7978 | 0.8763 | 0.9429 | 0.7971 | 0.8612 | 0.4771 | 0.9003 | 0.8153 | 0.7909 | 0.8689 | 0.6755 | 0.7517 | 0.3761 | 0.7908 | 0.6493 | 0.7546 |
| 0.2387 | 10.38 | 3840 | 0.5195 | 0.7007 | 0.8070 | 0.8796 | 0.9452 | 0.8050 | 0.8572 | 0.5082 | 0.8902 | 0.8114 | 0.8315 | 0.8690 | 0.6623 | 0.7578 | 0.3843 | 0.7979 | 0.6553 | 0.7786 |
| 0.1978 | 10.43 | 3860 | 0.5542 | 0.6970 | 0.8079 | 0.8778 | 0.9321 | 0.8004 | 0.8992 | 0.4805 | 0.8880 | 0.8324 | 0.8223 | 0.8678 | 0.6601 | 0.7516 | 0.3669 | 0.7947 | 0.6669 | 0.7709 |
| 1.1434 | 10.49 | 3880 | 0.5309 | 0.6989 | 0.8092 | 0.8791 | 0.9322 | 0.8335 | 0.8746 | 0.4651 | 0.8949 | 0.8429 | 0.8215 | 0.8673 | 0.6571 | 0.7474 | 0.3863 | 0.8007 | 0.6581 | 0.7754 |
| 0.1303 | 10.54 | 3900 | 0.5046 | 0.7093 | 0.8109 | 0.8836 | 0.9322 | 0.8073 | 0.8619 | 0.4903 | 0.9087 | 0.8338 | 0.8418 | 0.8694 | 0.6775 | 0.7593 | 0.3962 | 0.8053 | 0.6635 | 0.7943 |
| 0.1852 | 10.59 | 3920 | 0.5232 | 0.7062 | 0.8088 | 0.8817 | 0.9322 | 0.8051 | 0.8746 | 0.4951 | 0.9094 | 0.8222 | 0.8227 | 0.8691 | 0.6798 | 0.7631 | 0.3979 | 0.8042 | 0.6553 | 0.7743 |
| 0.163 | 10.65 | 3940 | 0.5616 | 0.6985 | 0.8148 | 0.8759 | 0.9250 | 0.8139 | 0.8405 | 0.5471 | 0.8947 | 0.8755 | 0.8069 | 0.8628 | 0.6576 | 0.7535 | 0.4040 | 0.7985 | 0.6562 | 0.7571 |
| 0.1246 | 10.7 | 3960 | 0.5562 | 0.6947 | 0.8048 | 0.8743 | 0.9323 | 0.8112 | 0.8646 | 0.5051 | 0.9011 | 0.8549 | 0.7646 | 0.8614 | 0.6642 | 0.7561 | 0.4115 | 0.8013 | 0.6496 | 0.7191 |
| 0.1551 | 10.76 | 3980 | 0.5385 | 0.7000 | 0.8061 | 0.8782 | 0.9366 | 0.8124 | 0.8729 | 0.4912 | 0.8993 | 0.8277 | 0.8029 | 0.8658 | 0.6646 | 0.7624 | 0.4025 | 0.8009 | 0.6574 | 0.7464 |
| 0.1331 | 10.81 | 4000 | 0.5253 | 0.6981 | 0.8251 | 0.8747 | 0.9290 | 0.8215 | 0.8772 | 0.6042 | 0.8728 | 0.8741 | 0.7972 | 0.8673 | 0.6590 | 0.7676 | 0.3934 | 0.7926 | 0.6708 | 0.7360 |
| 0.164 | 10.86 | 4020 | 0.5408 | 0.6896 | 0.7965 | 0.8722 | 0.9387 | 0.8077 | 0.8678 | 0.4792 | 0.9052 | 0.8495 | 0.7271 | 0.8630 | 0.6566 | 0.7652 | 0.3907 | 0.7904 | 0.6769 | 0.6844 |
| 0.2992 | 10.92 | 4040 | 0.4891 | 0.7085 | 0.8211 | 0.8822 | 0.9322 | 0.8341 | 0.8662 | 0.5303 | 0.8922 | 0.8592 | 0.8337 | 0.8665 | 0.6560 | 0.7673 | 0.3975 | 0.8026 | 0.6946 | 0.7751 |
| 0.5583 | 10.97 | 4060 | 0.5178 | 0.7032 | 0.8113 | 0.8791 | 0.9242 | 0.8140 | 0.8637 | 0.5165 | 0.9088 | 0.8310 | 0.8207 | 0.8600 | 0.6629 | 0.7628 | 0.3986 | 0.8043 | 0.6801 | 0.7539 |
| 0.2979 | 11.03 | 4080 | 0.5387 | 0.7078 | 0.8218 | 0.8806 | 0.9227 | 0.8195 | 0.8675 | 0.5477 | 0.8942 | 0.8591 | 0.8423 | 0.8631 | 0.6687 | 0.7587 | 0.3998 | 0.8031 | 0.6887 | 0.7723 |
| 0.1316 | 11.08 | 4100 | 0.5345 | 0.7043 | 0.8156 | 0.8810 | 0.9374 | 0.8271 | 0.8597 | 0.5116 | 0.8893 | 0.8535 | 0.8303 | 0.8655 | 0.6618 | 0.7547 | 0.3989 | 0.8088 | 0.6779 | 0.7628 |
| 0.0826 | 11.14 | 4120 | 0.5548 | 0.7032 | 0.8100 | 0.8807 | 0.9285 | 0.8053 | 0.8655 | 0.4684 | 0.8945 | 0.8613 | 0.8462 | 0.8631 | 0.6734 | 0.7537 | 0.3760 | 0.8062 | 0.6754 | 0.7744 |
| 0.2202 | 11.19 | 4140 | 0.5560 | 0.6969 | 0.8014 | 0.8780 | 0.9305 | 0.7836 | 0.8645 | 0.4524 | 0.8979 | 0.8550 | 0.8258 | 0.8633 | 0.6689 | 0.7484 | 0.3774 | 0.8052 | 0.6585 | 0.7567 |
| 0.1173 | 11.24 | 4160 | 0.5449 | 0.6980 | 0.8126 | 0.8764 | 0.9357 | 0.7982 | 0.8568 | 0.5453 | 0.8808 | 0.8448 | 0.8266 | 0.8590 | 0.6650 | 0.7564 | 0.3648 | 0.8001 | 0.6832 | 0.7575 |
| 0.2042 | 11.3 | 4180 | 0.5187 | 0.7145 | 0.8163 | 0.8859 | 0.9255 | 0.7955 | 0.8716 | 0.4897 | 0.9028 | 0.8533 | 0.8760 | 0.8640 | 0.6608 | 0.7610 | 0.3982 | 0.8069 | 0.7113 | 0.7992 |
| 0.3217 | 11.35 | 4200 | 0.5622 | 0.7025 | 0.8106 | 0.8797 | 0.9297 | 0.7967 | 0.8600 | 0.5008 | 0.8966 | 0.8600 | 0.8306 | 0.8646 | 0.6606 | 0.7551 | 0.3964 | 0.8045 | 0.6758 | 0.7605 |
| 0.1931 | 11.41 | 4220 | 0.5558 | 0.7066 | 0.8141 | 0.8818 | 0.9319 | 0.8162 | 0.8842 | 0.4977 | 0.8970 | 0.8410 | 0.8305 | 0.8651 | 0.6655 | 0.7627 | 0.4046 | 0.8084 | 0.6798 | 0.7601 |
| 0.223 | 11.46 | 4240 | 0.5849 | 0.6760 | 0.7929 | 0.8630 | 0.9468 | 0.8051 | 0.8890 | 0.5197 | 0.8878 | 0.8614 | 0.6407 | 0.8646 | 0.6706 | 0.7622 | 0.4149 | 0.7891 | 0.6286 | 0.6018 |
| 0.3508 | 11.51 | 4260 | 0.5701 | 0.6909 | 0.8025 | 0.8698 | 0.9422 | 0.8224 | 0.8727 | 0.4983 | 0.8817 | 0.8765 | 0.7240 | 0.8629 | 0.6726 | 0.7613 | 0.4103 | 0.7803 | 0.6710 | 0.6782 |
| 0.1145 | 11.57 | 4280 | 0.5226 | 0.7036 | 0.8121 | 0.8782 | 0.9378 | 0.7925 | 0.8828 | 0.5170 | 0.8844 | 0.8744 | 0.7958 | 0.8613 | 0.6645 | 0.7536 | 0.3927 | 0.7948 | 0.7167 | 0.7411 |
| 0.3087 | 11.62 | 4300 | 0.5339 | 0.7019 | 0.8148 | 0.8772 | 0.9450 | 0.8022 | 0.8851 | 0.5208 | 0.8639 | 0.8785 | 0.8079 | 0.8580 | 0.6705 | 0.7572 | 0.3803 | 0.7963 | 0.7067 | 0.7441 |
| 0.1138 | 11.68 | 4320 | 0.5449 | 0.6969 | 0.8052 | 0.8741 | 0.9323 | 0.7961 | 0.8831 | 0.4897 | 0.8868 | 0.8640 | 0.7846 | 0.8520 | 0.6703 | 0.7525 | 0.3768 | 0.7932 | 0.7094 | 0.7238 |
| 0.1641 | 11.73 | 4340 | 0.5402 | 0.6999 | 0.8097 | 0.8747 | 0.9219 | 0.8045 | 0.8844 | 0.5193 | 0.8954 | 0.8346 | 0.8082 | 0.8543 | 0.6716 | 0.7546 | 0.3918 | 0.7904 | 0.6985 | 0.7380 |
| 0.2034 | 11.78 | 4360 | 0.5265 | 0.7109 | 0.8187 | 0.8813 | 0.9356 | 0.8192 | 0.8710 | 0.5384 | 0.8934 | 0.8642 | 0.8093 | 0.8606 | 0.6745 | 0.7612 | 0.4153 | 0.8029 | 0.7009 | 0.7611 |
| 0.1204 | 11.84 | 4380 | 0.5285 | 0.7083 | 0.8137 | 0.8804 | 0.9312 | 0.8182 | 0.8727 | 0.5095 | 0.9028 | 0.8634 | 0.7984 | 0.8569 | 0.6800 | 0.7551 | 0.4044 | 0.8060 | 0.7009 | 0.7549 |
| 0.2684 | 11.89 | 4400 | 0.5641 | 0.7045 | 0.8047 | 0.8788 | 0.9394 | 0.8446 | 0.8709 | 0.4669 | 0.9120 | 0.8363 | 0.7625 | 0.8480 | 0.6745 | 0.7552 | 0.4116 | 0.8117 | 0.6998 | 0.7306 |
| 0.1795 | 11.95 | 4420 | 0.5086 | 0.7092 | 0.8107 | 0.8820 | 0.9428 | 0.8323 | 0.8628 | 0.4895 | 0.9014 | 0.8426 | 0.8037 | 0.8559 | 0.6713 | 0.7506 | 0.4089 | 0.8097 | 0.7006 | 0.7676 |
| 0.1273 | 12.0 | 4440 | 0.5752 | 0.6985 | 0.8105 | 0.8744 | 0.9416 | 0.8068 | 0.8657 | 0.5308 | 0.8750 | 0.8667 | 0.7870 | 0.8485 | 0.6587 | 0.7526 | 0.3897 | 0.7958 | 0.7133 | 0.7308 |
| 0.2277 | 12.05 | 4460 | 0.5786 | 0.7005 | 0.8037 | 0.8758 | 0.9380 | 0.7862 | 0.8540 | 0.5036 | 0.8940 | 0.8676 | 0.7829 | 0.8503 | 0.6653 | 0.7464 | 0.3983 | 0.7974 | 0.7158 | 0.7299 |
| 0.2065 | 12.11 | 4480 | 0.5759 | 0.7021 | 0.8088 | 0.8768 | 0.9290 | 0.8154 | 0.8812 | 0.5009 | 0.9027 | 0.8534 | 0.7787 | 0.8523 | 0.6619 | 0.7602 | 0.4053 | 0.8003 | 0.7089 | 0.7255 |
| 0.2858 | 12.16 | 4500 | 0.5469 | 0.7040 | 0.8112 | 0.8769 | 0.9366 | 0.8173 | 0.8560 | 0.5284 | 0.8907 | 0.8453 | 0.8040 | 0.8505 | 0.6637 | 0.7556 | 0.4078 | 0.7983 | 0.7124 | 0.7399 |
| 0.1869 | 12.22 | 4520 | 0.5156 | 0.7109 | 0.8204 | 0.8819 | 0.9231 | 0.7980 | 0.8745 | 0.5247 | 0.8903 | 0.8808 | 0.8513 | 0.8637 | 0.6648 | 0.7594 | 0.4164 | 0.8009 | 0.6947 | 0.7764 |
| 0.2882 | 12.27 | 4540 | 0.5421 | 0.7132 | 0.8164 | 0.8840 | 0.9265 | 0.7913 | 0.8751 | 0.5107 | 0.8990 | 0.8471 | 0.8647 | 0.8649 | 0.6703 | 0.7584 | 0.4149 | 0.8046 | 0.6920 | 0.7875 |
| 0.1855 | 12.32 | 4560 | 0.5286 | 0.7110 | 0.8187 | 0.8825 | 0.9296 | 0.8065 | 0.8719 | 0.5405 | 0.8963 | 0.8363 | 0.8496 | 0.8648 | 0.6624 | 0.7625 | 0.4237 | 0.8042 | 0.6811 | 0.7779 |
| 0.0765 | 12.38 | 4580 | 0.5463 | 0.7092 | 0.8190 | 0.8808 | 0.9320 | 0.8074 | 0.8703 | 0.5498 | 0.8895 | 0.8471 | 0.8371 | 0.8629 | 0.6658 | 0.7656 | 0.4163 | 0.8007 | 0.6839 | 0.7691 |
| 0.1501 | 12.43 | 4600 | 0.5543 | 0.7090 | 0.8188 | 0.8810 | 0.9258 | 0.8169 | 0.8863 | 0.5025 | 0.8877 | 0.8735 | 0.8390 | 0.8625 | 0.6698 | 0.7602 | 0.4107 | 0.8006 | 0.6909 | 0.7683 |
| 0.2011 | 12.49 | 4620 | 0.5402 | 0.7070 | 0.8079 | 0.8805 | 0.9348 | 0.8114 | 0.8769 | 0.4842 | 0.9007 | 0.8217 | 0.8256 | 0.8575 | 0.6747 | 0.7604 | 0.4062 | 0.8045 | 0.6908 | 0.7547 |
| 0.2811 | 12.54 | 4640 | 0.5623 | 0.7109 | 0.8198 | 0.8818 | 0.9324 | 0.8165 | 0.8614 | 0.5483 | 0.8918 | 0.8414 | 0.8468 | 0.8635 | 0.6722 | 0.7605 | 0.4088 | 0.8010 | 0.6916 | 0.7786 |
| 0.1678 | 12.59 | 4660 | 0.5521 | 0.7139 | 0.8219 | 0.8838 | 0.9264 | 0.8094 | 0.8663 | 0.5321 | 0.8902 | 0.8496 | 0.8796 | 0.8646 | 0.6716 | 0.7621 | 0.3920 | 0.7980 | 0.7056 | 0.8031 |
| 0.4329 | 12.65 | 4680 | 0.5468 | 0.7142 | 0.8160 | 0.8847 | 0.9335 | 0.8008 | 0.8607 | 0.5045 | 0.8952 | 0.8510 | 0.8665 | 0.8646 | 0.6709 | 0.7608 | 0.4045 | 0.8032 | 0.7002 | 0.7948 |
| 0.1534 | 12.7 | 4700 | 0.5267 | 0.7164 | 0.8163 | 0.8861 | 0.9338 | 0.7933 | 0.8596 | 0.5049 | 0.8984 | 0.8540 | 0.8706 | 0.8655 | 0.6674 | 0.7596 | 0.4144 | 0.8073 | 0.7055 | 0.7948 |
| 0.2642 | 12.76 | 4720 | 0.5106 | 0.7123 | 0.8187 | 0.8840 | 0.9309 | 0.7976 | 0.8623 | 0.5285 | 0.8956 | 0.8617 | 0.8544 | 0.8660 | 0.6639 | 0.7622 | 0.4075 | 0.8060 | 0.6961 | 0.7842 |
| 0.1096 | 12.81 | 4740 | 0.5154 | 0.7123 | 0.8249 | 0.8833 | 0.9280 | 0.8116 | 0.8661 | 0.5616 | 0.8884 | 0.8482 | 0.8702 | 0.8654 | 0.6624 | 0.7613 | 0.3998 | 0.8021 | 0.6971 | 0.7983 |
| 0.1454 | 12.86 | 4760 | 0.5147 | 0.7008 | 0.8098 | 0.8793 | 0.9414 | 0.7752 | 0.8858 | 0.5114 | 0.8818 | 0.8572 | 0.8157 | 0.8708 | 0.6553 | 0.7575 | 0.4072 | 0.8022 | 0.6539 | 0.7587 |
| 0.2267 | 12.92 | 4780 | 0.5824 | 0.6934 | 0.8007 | 0.8725 | 0.9436 | 0.8237 | 0.8761 | 0.4947 | 0.8987 | 0.8511 | 0.7170 | 0.8607 | 0.6729 | 0.7670 | 0.4157 | 0.7944 | 0.6573 | 0.6861 |
| 0.1403 | 12.97 | 4800 | 0.5346 | 0.6917 | 0.8006 | 0.8732 | 0.9364 | 0.8246 | 0.8715 | 0.4715 | 0.9050 | 0.8746 | 0.7207 | 0.8641 | 0.6758 | 0.7634 | 0.4065 | 0.8034 | 0.6448 | 0.6841 |
| 0.1457 | 13.03 | 4820 | 0.5468 | 0.6870 | 0.7989 | 0.8712 | 0.9394 | 0.8268 | 0.8813 | 0.4657 | 0.8959 | 0.8684 | 0.7148 | 0.8580 | 0.6738 | 0.7553 | 0.3897 | 0.8040 | 0.6482 | 0.6803 |
| 0.1959 | 13.08 | 4840 | 0.5889 | 0.6759 | 0.7947 | 0.8621 | 0.9351 | 0.8170 | 0.8737 | 0.5248 | 0.8975 | 0.8713 | 0.6434 | 0.8585 | 0.6715 | 0.7555 | 0.4070 | 0.7859 | 0.6381 | 0.6145 |
| 0.1888 | 13.14 | 4860 | 0.5391 | 0.6928 | 0.8006 | 0.8722 | 0.9397 | 0.8170 | 0.8741 | 0.4950 | 0.8998 | 0.8562 | 0.7222 | 0.8578 | 0.6780 | 0.7546 | 0.4108 | 0.7973 | 0.6634 | 0.6878 |
| 0.1244 | 13.19 | 4880 | 0.5000 | 0.7118 | 0.8144 | 0.8825 | 0.9449 | 0.8240 | 0.8686 | 0.5138 | 0.8952 | 0.8501 | 0.8041 | 0.8595 | 0.6834 | 0.7673 | 0.4042 | 0.8061 | 0.6907 | 0.7714 |
| 0.3245 | 13.24 | 4900 | 0.5033 | 0.7139 | 0.8223 | 0.8849 | 0.9299 | 0.8268 | 0.8769 | 0.5158 | 0.8960 | 0.8649 | 0.8461 | 0.8697 | 0.6746 | 0.7589 | 0.4030 | 0.8043 | 0.6851 | 0.8016 |
| 0.0893 | 13.3 | 4920 | 0.4766 | 0.7166 | 0.8242 | 0.8851 | 0.9350 | 0.7987 | 0.8822 | 0.5494 | 0.8856 | 0.8620 | 0.8563 | 0.8683 | 0.6626 | 0.7724 | 0.4184 | 0.8008 | 0.7026 | 0.7915 |
| 0.2461 | 13.35 | 4940 | 0.5196 | 0.6961 | 0.8123 | 0.8744 | 0.9329 | 0.8239 | 0.8973 | 0.5374 | 0.8951 | 0.8583 | 0.7414 | 0.8647 | 0.6666 | 0.7676 | 0.4128 | 0.7991 | 0.6568 | 0.7053 |
| 0.3947 | 13.41 | 4960 | 0.5974 | 0.6791 | 0.7939 | 0.8654 | 0.9323 | 0.8005 | 0.8964 | 0.4953 | 0.9010 | 0.8603 | 0.6714 | 0.8626 | 0.6691 | 0.7586 | 0.3970 | 0.7883 | 0.6390 | 0.6388 |
| 0.1945 | 13.46 | 4980 | 0.4965 | 0.7104 | 0.8219 | 0.8826 | 0.9362 | 0.8034 | 0.8888 | 0.5460 | 0.8855 | 0.8719 | 0.8215 | 0.8665 | 0.6693 | 0.7727 | 0.3846 | 0.8019 | 0.7001 | 0.7779 |
| 0.8697 | 13.51 | 5000 | 0.4927 | 0.7097 | 0.8193 | 0.8825 | 0.9399 | 0.8246 | 0.8839 | 0.5265 | 0.8868 | 0.8474 | 0.8256 | 0.8650 | 0.6742 | 0.7690 | 0.3734 | 0.8006 | 0.7016 | 0.7839 |
| 0.0942 | 13.57 | 5020 | 0.5281 | 0.7076 | 0.8089 | 0.8813 | 0.9373 | 0.8192 | 0.8604 | 0.4780 | 0.9039 | 0.8605 | 0.8031 | 0.8639 | 0.6776 | 0.7577 | 0.3921 | 0.7985 | 0.6956 | 0.7679 |
| 0.1212 | 13.62 | 5040 | 0.5164 | 0.7183 | 0.8175 | 0.8877 | 0.9287 | 0.8103 | 0.8648 | 0.4851 | 0.9055 | 0.8453 | 0.8827 | 0.8680 | 0.6716 | 0.7601 | 0.3956 | 0.8045 | 0.7173 | 0.8110 |
| 0.3011 | 13.68 | 5060 | 0.5389 | 0.7126 | 0.8143 | 0.8840 | 0.9352 | 0.7830 | 0.8725 | 0.4934 | 0.8849 | 0.8541 | 0.8770 | 0.8611 | 0.6721 | 0.7540 | 0.3804 | 0.7997 | 0.7164 | 0.8049 |
| 0.2132 | 13.73 | 5080 | 0.5279 | 0.7108 | 0.8182 | 0.8830 | 0.9269 | 0.7763 | 0.8784 | 0.5231 | 0.8880 | 0.8677 | 0.8672 | 0.8637 | 0.6580 | 0.7550 | 0.3864 | 0.7984 | 0.7173 | 0.7966 |
| 0.2289 | 13.78 | 5100 | 0.5667 | 0.7119 | 0.8215 | 0.8828 | 0.9288 | 0.7987 | 0.8858 | 0.5180 | 0.8767 | 0.8629 | 0.8797 | 0.8621 | 0.6699 | 0.7561 | 0.3804 | 0.7964 | 0.7181 | 0.8002 |
| 0.1043 | 13.84 | 5120 | 0.5473 | 0.7141 | 0.8230 | 0.8839 | 0.9303 | 0.8010 | 0.8678 | 0.5227 | 0.8784 | 0.8807 | 0.8801 | 0.8649 | 0.6716 | 0.7646 | 0.3833 | 0.7966 | 0.7173 | 0.8007 |
| 0.2199 | 13.89 | 5140 | 0.5189 | 0.7166 | 0.8209 | 0.8856 | 0.9313 | 0.7987 | 0.8622 | 0.5373 | 0.8949 | 0.8429 | 0.8790 | 0.8648 | 0.6692 | 0.7645 | 0.3953 | 0.8028 | 0.7168 | 0.8031 |
| 0.0932 | 13.95 | 5160 | 0.5387 | 0.7148 | 0.8209 | 0.8848 | 0.9297 | 0.8056 | 0.8698 | 0.5252 | 0.8906 | 0.8443 | 0.8809 | 0.8653 | 0.6700 | 0.7620 | 0.3899 | 0.7998 | 0.7128 | 0.8036 |
| 0.1623 | 14.0 | 5180 | 0.5524 | 0.7165 | 0.8219 | 0.8856 | 0.9333 | 0.8106 | 0.8697 | 0.5145 | 0.8853 | 0.8578 | 0.8819 | 0.8653 | 0.6740 | 0.7647 | 0.3905 | 0.8010 | 0.7178 | 0.8020 |
| 0.0626 | 14.05 | 5200 | 0.5311 | 0.7172 | 0.8227 | 0.8859 | 0.9294 | 0.8223 | 0.8721 | 0.5080 | 0.8927 | 0.8637 | 0.8706 | 0.8666 | 0.6749 | 0.7642 | 0.3996 | 0.8023 | 0.7128 | 0.7999 |
| 1.7311 | 14.11 | 5220 | 0.5445 | 0.7139 | 0.8295 | 0.8830 | 0.9237 | 0.8192 | 0.8860 | 0.5533 | 0.8765 | 0.8684 | 0.8794 | 0.8651 | 0.6716 | 0.7664 | 0.3903 | 0.7962 | 0.7103 | 0.7972 |
| 0.1861 | 14.16 | 5240 | 0.5975 | 0.6976 | 0.8143 | 0.8760 | 0.9291 | 0.7984 | 0.8981 | 0.5213 | 0.8766 | 0.8598 | 0.8166 | 0.8654 | 0.6672 | 0.7612 | 0.3828 | 0.7952 | 0.6638 | 0.7477 |
| 0.2007 | 14.22 | 5260 | 0.5331 | 0.7162 | 0.8156 | 0.8866 | 0.9317 | 0.8013 | 0.8842 | 0.4812 | 0.8993 | 0.8364 | 0.8754 | 0.8666 | 0.6722 | 0.7657 | 0.3968 | 0.8055 | 0.7088 | 0.7978 |
| 0.1575 | 14.27 | 5280 | 0.5513 | 0.7118 | 0.8166 | 0.8838 | 0.9339 | 0.8226 | 0.8685 | 0.4718 | 0.8880 | 0.8788 | 0.8525 | 0.8655 | 0.6706 | 0.7593 | 0.3990 | 0.8010 | 0.7020 | 0.7853 |
| 0.1569 | 14.32 | 5300 | 0.5492 | 0.7172 | 0.8262 | 0.8861 | 0.9245 | 0.8175 | 0.8864 | 0.5106 | 0.8877 | 0.8790 | 0.8779 | 0.8671 | 0.6691 | 0.7647 | 0.3992 | 0.8027 | 0.7159 | 0.8019 |
| 0.2031 | 14.38 | 5320 | 0.5493 | 0.7185 | 0.8141 | 0.8883 | 0.9315 | 0.7878 | 0.8691 | 0.4781 | 0.9072 | 0.8440 | 0.8806 | 0.8674 | 0.6735 | 0.7644 | 0.3924 | 0.8078 | 0.7183 | 0.8057 |
| 0.103 | 14.43 | 5340 | 0.5587 | 0.7156 | 0.8207 | 0.8856 | 0.9318 | 0.7886 | 0.8804 | 0.5193 | 0.8899 | 0.8720 | 0.8627 | 0.8662 | 0.6653 | 0.7655 | 0.4027 | 0.8051 | 0.7102 | 0.7938 |
| 0.2542 | 14.49 | 5360 | 0.5641 | 0.7139 | 0.8220 | 0.8840 | 0.9326 | 0.7971 | 0.8976 | 0.5245 | 0.8824 | 0.8672 | 0.8529 | 0.8645 | 0.6695 | 0.7665 | 0.3989 | 0.8021 | 0.7136 | 0.7825 |
| 0.1785 | 14.54 | 5380 | 0.5444 | 0.7116 | 0.8231 | 0.8824 | 0.9338 | 0.8160 | 0.8885 | 0.5364 | 0.8805 | 0.8647 | 0.8420 | 0.8638 | 0.6712 | 0.7654 | 0.3904 | 0.7995 | 0.7144 | 0.7768 |
| 0.1191 | 14.59 | 5400 | 0.5167 | 0.7168 | 0.8189 | 0.8874 | 0.9391 | 0.7911 | 0.8697 | 0.5050 | 0.8919 | 0.8745 | 0.8608 | 0.8727 | 0.6679 | 0.7601 | 0.3963 | 0.8043 | 0.7122 | 0.8041 |
| 0.147 | 14.65 | 5420 | 0.5700 | 0.7025 | 0.8119 | 0.8802 | 0.9345 | 0.8138 | 0.8727 | 0.4860 | 0.8877 | 0.8510 | 0.8373 | 0.8656 | 0.6679 | 0.7542 | 0.3762 | 0.8026 | 0.6853 | 0.7656 |
| 0.3545 | 14.7 | 5440 | 0.5473 | 0.7036 | 0.8157 | 0.8802 | 0.9316 | 0.8189 | 0.8559 | 0.5025 | 0.8869 | 0.8748 | 0.8389 | 0.8662 | 0.6664 | 0.7580 | 0.3780 | 0.8012 | 0.6869 | 0.7685 |
| 0.1947 | 14.76 | 5460 | 0.5036 | 0.7158 | 0.8139 | 0.8886 | 0.9412 | 0.8081 | 0.8616 | 0.4515 | 0.8973 | 0.8696 | 0.8683 | 0.8740 | 0.6681 | 0.7505 | 0.3915 | 0.8079 | 0.7101 | 0.8085 |
| 0.1545 | 14.81 | 5480 | 0.5430 | 0.7140 | 0.8196 | 0.8850 | 0.9300 | 0.8214 | 0.8490 | 0.4726 | 0.8855 | 0.8980 | 0.8804 | 0.8668 | 0.6682 | 0.7467 | 0.4002 | 0.8001 | 0.7119 | 0.8043 |
| 0.1854 | 14.86 | 5500 | 0.5379 | 0.7159 | 0.8230 | 0.8858 | 0.9298 | 0.8248 | 0.8609 | 0.5100 | 0.8904 | 0.8643 | 0.8807 | 0.8649 | 0.6702 | 0.7501 | 0.3928 | 0.8047 | 0.7253 | 0.8033 |
| 0.7851 | 14.92 | 5520 | 0.5318 | 0.7118 | 0.8193 | 0.8842 | 0.9328 | 0.8164 | 0.8700 | 0.5075 | 0.8907 | 0.8591 | 0.8584 | 0.8675 | 0.6700 | 0.7544 | 0.3964 | 0.8049 | 0.7030 | 0.7867 |
| 0.1866 | 14.97 | 5540 | 0.5382 | 0.7025 | 0.8114 | 0.8797 | 0.9390 | 0.7935 | 0.8645 | 0.5080 | 0.8879 | 0.8801 | 0.8070 | 0.8687 | 0.6702 | 0.7570 | 0.4004 | 0.8060 | 0.6687 | 0.7467 |
| 0.2979 | 15.03 | 5560 | 0.5545 | 0.7107 | 0.8179 | 0.8830 | 0.9320 | 0.7922 | 0.8796 | 0.5252 | 0.8922 | 0.8641 | 0.8397 | 0.8642 | 0.6717 | 0.7624 | 0.3955 | 0.8067 | 0.7031 | 0.7715 |
| 0.1962 | 15.08 | 5580 | 0.5273 | 0.7225 | 0.8270 | 0.8900 | 0.9313 | 0.7884 | 0.8483 | 0.5552 | 0.8960 | 0.8643 | 0.9056 | 0.8730 | 0.6749 | 0.7600 | 0.3920 | 0.8094 | 0.7187 | 0.8298 |
| 0.1601 | 15.14 | 5600 | 0.5257 | 0.7258 | 0.8288 | 0.8917 | 0.9288 | 0.8094 | 0.8736 | 0.5365 | 0.8994 | 0.8363 | 0.9176 | 0.8714 | 0.6785 | 0.7648 | 0.3888 | 0.8113 | 0.7269 | 0.8391 |
| 0.1676 | 15.19 | 5620 | 0.5463 | 0.7204 | 0.8266 | 0.8886 | 0.9257 | 0.7966 | 0.8770 | 0.5368 | 0.8944 | 0.8520 | 0.9036 | 0.8701 | 0.6790 | 0.7626 | 0.3802 | 0.8070 | 0.7154 | 0.8284 |
| 0.0922 | 15.24 | 5640 | 0.5255 | 0.7156 | 0.8236 | 0.8861 | 0.9295 | 0.8067 | 0.8726 | 0.5232 | 0.8891 | 0.8609 | 0.8831 | 0.8684 | 0.6777 | 0.7625 | 0.3750 | 0.8051 | 0.7093 | 0.8110 |
| 0.2047 | 15.3 | 5660 | 0.5494 | 0.7112 | 0.8137 | 0.8840 | 0.9341 | 0.7945 | 0.8725 | 0.4989 | 0.8963 | 0.8446 | 0.8551 | 0.8640 | 0.6778 | 0.7621 | 0.3802 | 0.8056 | 0.7018 | 0.7872 |
| 0.111 | 15.35 | 5680 | 0.5560 | 0.7114 | 0.8163 | 0.8836 | 0.9316 | 0.7924 | 0.8645 | 0.5130 | 0.8935 | 0.8603 | 0.8587 | 0.8660 | 0.6701 | 0.7613 | 0.3943 | 0.8031 | 0.6969 | 0.7882 |
| 0.1209 | 15.41 | 5700 | 0.5347 | 0.7110 | 0.8180 | 0.8833 | 0.9330 | 0.8130 | 0.8500 | 0.5268 | 0.8949 | 0.8525 | 0.8558 | 0.8657 | 0.6675 | 0.7604 | 0.3979 | 0.8035 | 0.6961 | 0.7855 |
| 0.2232 | 15.46 | 5720 | 0.5374 | 0.7129 | 0.8212 | 0.8845 | 0.9311 | 0.8104 | 0.8556 | 0.5364 | 0.8941 | 0.8541 | 0.8664 | 0.8668 | 0.6703 | 0.7612 | 0.3832 | 0.8043 | 0.7098 | 0.7945 |
| 0.107 | 15.51 | 5740 | 0.5326 | 0.7150 | 0.8181 | 0.8858 | 0.9378 | 0.8054 | 0.8587 | 0.5058 | 0.8927 | 0.8687 | 0.8577 | 0.8662 | 0.6705 | 0.7613 | 0.4002 | 0.8081 | 0.7074 | 0.7910 |
| 0.1721 | 15.57 | 5760 | 0.5521 | 0.7180 | 0.8209 | 0.8867 | 0.9280 | 0.8192 | 0.8672 | 0.5088 | 0.9022 | 0.8469 | 0.8739 | 0.8654 | 0.6727 | 0.7622 | 0.4117 | 0.8085 | 0.7097 | 0.7957 |
| 0.07 | 15.62 | 5780 | 0.5626 | 0.7155 | 0.8151 | 0.8858 | 0.9318 | 0.7941 | 0.8799 | 0.4922 | 0.9019 | 0.8496 | 0.8564 | 0.8645 | 0.6715 | 0.7626 | 0.4140 | 0.8103 | 0.7028 | 0.7831 |
| 0.1276 | 15.68 | 5800 | 0.5635 | 0.7130 | 0.8164 | 0.8843 | 0.9299 | 0.7855 | 0.8918 | 0.4966 | 0.8929 | 0.8661 | 0.8522 | 0.8644 | 0.6693 | 0.7593 | 0.4055 | 0.8062 | 0.7064 | 0.7799 |
| 0.1052 | 15.73 | 5820 | 0.5672 | 0.7146 | 0.8165 | 0.8849 | 0.9305 | 0.8068 | 0.8792 | 0.4870 | 0.8979 | 0.8617 | 0.8522 | 0.8616 | 0.6754 | 0.7642 | 0.4041 | 0.8091 | 0.7085 | 0.7796 |
| 0.1162 | 15.78 | 5840 | 0.5444 | 0.7236 | 0.8235 | 0.8894 | 0.9312 | 0.8066 | 0.8719 | 0.5166 | 0.9017 | 0.8552 | 0.8810 | 0.8663 | 0.6741 | 0.7667 | 0.4209 | 0.8130 | 0.7225 | 0.8020 |
| 0.1426 | 15.84 | 5860 | 0.5171 | 0.7213 | 0.8194 | 0.8883 | 0.9372 | 0.7994 | 0.8742 | 0.5217 | 0.9028 | 0.8361 | 0.8643 | 0.8645 | 0.6745 | 0.7669 | 0.4170 | 0.8128 | 0.7183 | 0.7952 |
| 0.1844 | 15.89 | 5880 | 0.5518 | 0.7200 | 0.8189 | 0.8878 | 0.9326 | 0.8036 | 0.8469 | 0.5145 | 0.9060 | 0.8580 | 0.8709 | 0.8658 | 0.6754 | 0.7581 | 0.4150 | 0.8109 | 0.7170 | 0.7978 |
| 0.2945 | 15.95 | 5900 | 0.5565 | 0.7206 | 0.8200 | 0.8878 | 0.9276 | 0.7941 | 0.8755 | 0.5107 | 0.9054 | 0.8554 | 0.8714 | 0.8668 | 0.6768 | 0.7584 | 0.4143 | 0.8087 | 0.7191 | 0.8003 |
| 0.145 | 16.0 | 5920 | 0.5249 | 0.7216 | 0.8128 | 0.8894 | 0.9296 | 0.7719 | 0.8671 | 0.4800 | 0.9160 | 0.8482 | 0.8765 | 0.8674 | 0.6718 | 0.7571 | 0.4172 | 0.8110 | 0.7219 | 0.8047 |
| 0.1149 | 16.05 | 5940 | 0.5109 | 0.7248 | 0.8192 | 0.8917 | 0.9357 | 0.7882 | 0.8875 | 0.4793 | 0.9042 | 0.8622 | 0.8774 | 0.8719 | 0.6709 | 0.7587 | 0.4165 | 0.8130 | 0.7274 | 0.8153 |
| 0.2172 | 16.11 | 5960 | 0.5528 | 0.7198 | 0.8167 | 0.8887 | 0.9409 | 0.7893 | 0.8765 | 0.4869 | 0.8986 | 0.8756 | 0.8493 | 0.8699 | 0.6693 | 0.7596 | 0.4152 | 0.8100 | 0.7168 | 0.7982 |
| 0.1285 | 16.16 | 5980 | 0.5572 | 0.7186 | 0.8251 | 0.8873 | 0.9344 | 0.8110 | 0.8495 | 0.5442 | 0.8940 | 0.8703 | 0.8722 | 0.8704 | 0.6669 | 0.7590 | 0.4014 | 0.8054 | 0.7181 | 0.8086 |
| 0.5695 | 16.22 | 6000 | 0.5555 | 0.7170 | 0.8217 | 0.8864 | 0.9343 | 0.7986 | 0.8786 | 0.5190 | 0.8893 | 0.8661 | 0.8662 | 0.8676 | 0.6697 | 0.7586 | 0.3989 | 0.8037 | 0.7181 | 0.8027 |
| 0.1586 | 16.27 | 6020 | 0.5338 | 0.7082 | 0.8202 | 0.8818 | 0.9302 | 0.8060 | 0.8668 | 0.5346 | 0.8891 | 0.8778 | 0.8368 | 0.8673 | 0.6684 | 0.7633 | 0.4013 | 0.8058 | 0.6830 | 0.7686 |
| 0.0858 | 16.32 | 6040 | 0.5418 | 0.7179 | 0.8201 | 0.8874 | 0.9287 | 0.7893 | 0.8787 | 0.5203 | 0.9025 | 0.8470 | 0.8741 | 0.8666 | 0.6710 | 0.7677 | 0.3967 | 0.8107 | 0.7141 | 0.7987 |
| 0.0694 | 16.38 | 6060 | 0.6288 | 0.6944 | 0.8039 | 0.8737 | 0.9360 | 0.7864 | 0.8595 | 0.5333 | 0.8966 | 0.8448 | 0.7708 | 0.8571 | 0.6652 | 0.7609 | 0.3960 | 0.8011 | 0.6690 | 0.7116 |
| 0.1827 | 16.43 | 6080 | 0.5731 | 0.7078 | 0.8186 | 0.8819 | 0.9333 | 0.7980 | 0.8807 | 0.5254 | 0.8835 | 0.8632 | 0.8459 | 0.8653 | 0.6728 | 0.7609 | 0.3815 | 0.8048 | 0.6939 | 0.7753 |
| 0.2024 | 16.49 | 6100 | 0.5922 | 0.7009 | 0.8011 | 0.8790 | 0.9363 | 0.7873 | 0.8584 | 0.4856 | 0.9086 | 0.8194 | 0.8118 | 0.8614 | 0.6736 | 0.7591 | 0.3908 | 0.8061 | 0.6679 | 0.7474 |
| 0.0942 | 16.54 | 6120 | 0.6118 | 0.6925 | 0.7965 | 0.8734 | 0.9301 | 0.7877 | 0.8613 | 0.4843 | 0.9110 | 0.8342 | 0.7667 | 0.8607 | 0.6680 | 0.7506 | 0.3992 | 0.7971 | 0.6670 | 0.7049 |
| 0.7384 | 16.59 | 6140 | 0.5846 | 0.6992 | 0.8076 | 0.8771 | 0.9340 | 0.8140 | 0.8651 | 0.4822 | 0.8928 | 0.8693 | 0.7958 | 0.8603 | 0.6683 | 0.7503 | 0.4057 | 0.8055 | 0.6736 | 0.7303 |
| 0.1003 | 16.65 | 6160 | 0.5793 | 0.6939 | 0.8053 | 0.8734 | 0.9294 | 0.8115 | 0.8627 | 0.5136 | 0.9012 | 0.8538 | 0.7648 | 0.8618 | 0.6648 | 0.7553 | 0.4015 | 0.7961 | 0.6773 | 0.7009 |
| 0.0913 | 16.7 | 6180 | 0.5787 | 0.6959 | 0.8056 | 0.8741 | 0.9301 | 0.8184 | 0.8740 | 0.5007 | 0.9024 | 0.8561 | 0.7577 | 0.8667 | 0.6704 | 0.7628 | 0.4020 | 0.7908 | 0.6815 | 0.6972 |
| 0.2068 | 16.76 | 6200 | 0.5772 | 0.7098 | 0.8132 | 0.8812 | 0.9262 | 0.7967 | 0.8868 | 0.5004 | 0.9024 | 0.8626 | 0.8175 | 0.8672 | 0.6705 | 0.7588 | 0.4099 | 0.7945 | 0.7189 | 0.7485 |
| 0.2168 | 16.81 | 6220 | 0.5423 | 0.7131 | 0.8174 | 0.8835 | 0.9271 | 0.8112 | 0.8737 | 0.5133 | 0.9047 | 0.8578 | 0.8342 | 0.8654 | 0.6691 | 0.7563 | 0.4170 | 0.8039 | 0.7128 | 0.7674 |
| 2.61 | 16.86 | 6240 | 0.5607 | 0.7120 | 0.8119 | 0.8848 | 0.9325 | 0.8034 | 0.8904 | 0.4739 | 0.9058 | 0.8401 | 0.8370 | 0.8660 | 0.6706 | 0.7606 | 0.4059 | 0.8095 | 0.7006 | 0.7711 |
| 0.0699 | 16.92 | 6260 | 0.5302 | 0.7170 | 0.8227 | 0.8856 | 0.9317 | 0.8285 | 0.8699 | 0.5276 | 0.9004 | 0.8582 | 0.8428 | 0.8639 | 0.6700 | 0.7642 | 0.4255 | 0.8123 | 0.7100 | 0.7728 |
| 0.1728 | 16.97 | 6280 | 0.5543 | 0.7149 | 0.8196 | 0.8841 | 0.9306 | 0.8019 | 0.8793 | 0.5493 | 0.9028 | 0.8313 | 0.8419 | 0.8643 | 0.6714 | 0.7624 | 0.4212 | 0.8069 | 0.7044 | 0.7737 |
| 0.0756 | 17.03 | 6300 | 0.5593 | 0.7152 | 0.8238 | 0.8845 | 0.9300 | 0.8031 | 0.8610 | 0.5741 | 0.9001 | 0.8487 | 0.8493 | 0.8665 | 0.6692 | 0.7661 | 0.4136 | 0.8078 | 0.7034 | 0.7798 |
| 0.1633 | 17.08 | 6320 | 0.5958 | 0.7161 | 0.8222 | 0.8845 | 0.9316 | 0.7930 | 0.8729 | 0.5574 | 0.8959 | 0.8599 | 0.8446 | 0.8650 | 0.6699 | 0.7637 | 0.4271 | 0.8080 | 0.7070 | 0.7721 |
| 0.1015 | 17.14 | 6340 | 0.6030 | 0.7137 | 0.8161 | 0.8825 | 0.9302 | 0.8032 | 0.8629 | 0.5468 | 0.9068 | 0.8275 | 0.8353 | 0.8626 | 0.6709 | 0.7570 | 0.4303 | 0.8015 | 0.7070 | 0.7666 |
| 0.2928 | 17.19 | 6360 | 0.5675 | 0.7120 | 0.8159 | 0.8815 | 0.9330 | 0.7991 | 0.8788 | 0.5261 | 0.8977 | 0.8680 | 0.8083 | 0.8564 | 0.6667 | 0.7601 | 0.4344 | 0.8090 | 0.7142 | 0.7429 |
| 0.9543 | 17.24 | 6380 | 0.6129 | 0.7033 | 0.8100 | 0.8784 | 0.9339 | 0.7862 | 0.8748 | 0.5222 | 0.8977 | 0.8671 | 0.7883 | 0.8650 | 0.6657 | 0.7507 | 0.4321 | 0.8055 | 0.6784 | 0.7259 |
| 0.1715 | 17.3 | 6400 | 0.6306 | 0.7011 | 0.8094 | 0.8775 | 0.9307 | 0.7823 | 0.8702 | 0.5349 | 0.9003 | 0.8522 | 0.7950 | 0.8653 | 0.6656 | 0.7501 | 0.4185 | 0.8027 | 0.6771 | 0.7282 |
| 0.089 | 17.35 | 6420 | 0.6377 | 0.7018 | 0.8100 | 0.8777 | 0.9312 | 0.8045 | 0.8458 | 0.5439 | 0.9063 | 0.8376 | 0.8008 | 0.8650 | 0.6677 | 0.7543 | 0.4192 | 0.8031 | 0.6721 | 0.7314 |
| 0.1369 | 17.41 | 6440 | 0.6227 | 0.7024 | 0.8119 | 0.8780 | 0.9306 | 0.8143 | 0.8489 | 0.5448 | 0.9054 | 0.8375 | 0.8021 | 0.8644 | 0.6668 | 0.7544 | 0.4227 | 0.8049 | 0.6707 | 0.7328 |
| 0.2959 | 17.46 | 6460 | 0.5957 | 0.7049 | 0.8095 | 0.8788 | 0.9301 | 0.7926 | 0.8749 | 0.5265 | 0.9059 | 0.8350 | 0.8015 | 0.8633 | 0.6696 | 0.7549 | 0.4315 | 0.8039 | 0.6765 | 0.7347 |
| 0.0673 | 17.51 | 6480 | 0.5617 | 0.7096 | 0.8129 | 0.8818 | 0.9328 | 0.7848 | 0.8726 | 0.5238 | 0.9020 | 0.8588 | 0.8152 | 0.8663 | 0.6678 | 0.7582 | 0.4309 | 0.8056 | 0.6866 | 0.7519 |
| 0.1281 | 17.57 | 6500 | 0.5787 | 0.7047 | 0.8158 | 0.8775 | 0.9287 | 0.7978 | 0.8652 | 0.5508 | 0.8919 | 0.8742 | 0.8023 | 0.8567 | 0.6662 | 0.7588 | 0.4207 | 0.8024 | 0.6924 | 0.7359 |
| 0.6911 | 17.62 | 6520 | 0.5914 | 0.7092 | 0.8187 | 0.8813 | 0.9280 | 0.7934 | 0.8578 | 0.5551 | 0.8957 | 0.8608 | 0.8399 | 0.8628 | 0.6644 | 0.7599 | 0.3992 | 0.8019 | 0.7084 | 0.7679 |
| 0.1595 | 17.68 | 6540 | 0.5951 | 0.7027 | 0.8148 | 0.8784 | 0.9254 | 0.8011 | 0.8575 | 0.5508 | 0.9020 | 0.8535 | 0.8134 | 0.8633 | 0.6660 | 0.7572 | 0.4000 | 0.8046 | 0.6849 | 0.7426 |
| 0.1599 | 17.73 | 6560 | 0.5909 | 0.7014 | 0.8164 | 0.8778 | 0.9282 | 0.8115 | 0.8872 | 0.5408 | 0.8927 | 0.8541 | 0.8002 | 0.8629 | 0.6664 | 0.7665 | 0.4051 | 0.8072 | 0.6693 | 0.7324 |
| 0.1609 | 17.78 | 6580 | 0.5970 | 0.7023 | 0.8116 | 0.8792 | 0.9278 | 0.8104 | 0.8748 | 0.4994 | 0.9016 | 0.8631 | 0.8040 | 0.8643 | 0.6685 | 0.7621 | 0.4073 | 0.8103 | 0.6680 | 0.7354 |
| 0.2268 | 17.84 | 6600 | 0.5891 | 0.7019 | 0.8071 | 0.8790 | 0.9338 | 0.7934 | 0.8844 | 0.4920 | 0.9004 | 0.8478 | 0.7981 | 0.8637 | 0.6727 | 0.7583 | 0.4075 | 0.8092 | 0.6684 | 0.7338 |
| 0.3312 | 17.89 | 6620 | 0.6018 | 0.7021 | 0.8152 | 0.8781 | 0.9277 | 0.8099 | 0.8892 | 0.5217 | 0.8928 | 0.8652 | 0.7999 | 0.8640 | 0.6716 | 0.7630 | 0.4066 | 0.8064 | 0.6695 | 0.7338 |
| 0.2164 | 17.95 | 6640 | 0.5721 | 0.7131 | 0.8196 | 0.8843 | 0.9272 | 0.8000 | 0.8862 | 0.5169 | 0.8985 | 0.8654 | 0.8430 | 0.8673 | 0.6684 | 0.7671 | 0.4175 | 0.8098 | 0.6918 | 0.7695 |
| 0.2132 | 18.0 | 6660 | 0.5972 | 0.7115 | 0.8158 | 0.8836 | 0.9412 | 0.7924 | 0.8815 | 0.5187 | 0.8921 | 0.8696 | 0.8154 | 0.8624 | 0.6670 | 0.7689 | 0.4141 | 0.8147 | 0.7044 | 0.7492 |
| 0.1357 | 18.05 | 6680 | 0.5733 | 0.7108 | 0.8082 | 0.8849 | 0.9360 | 0.7686 | 0.8793 | 0.4753 | 0.9030 | 0.8510 | 0.8440 | 0.8637 | 0.6611 | 0.7649 | 0.3977 | 0.8113 | 0.7041 | 0.7730 |
| 0.1595 | 18.11 | 6700 | 0.5729 | 0.7153 | 0.8200 | 0.8859 | 0.9330 | 0.7988 | 0.8632 | 0.5240 | 0.8974 | 0.8688 | 0.8551 | 0.8666 | 0.6606 | 0.7684 | 0.4131 | 0.8104 | 0.7052 | 0.7831 |
| 0.3169 | 18.16 | 6720 | 0.5930 | 0.7149 | 0.8162 | 0.8862 | 0.9301 | 0.7785 | 0.8761 | 0.5033 | 0.9015 | 0.8655 | 0.8586 | 0.8670 | 0.6611 | 0.7626 | 0.4076 | 0.8094 | 0.7090 | 0.7875 |
| 0.143 | 18.22 | 6740 | 0.5853 | 0.7158 | 0.8170 | 0.8864 | 0.9291 | 0.7698 | 0.8828 | 0.5153 | 0.9000 | 0.8485 | 0.8731 | 0.8663 | 0.6623 | 0.7586 | 0.4031 | 0.8080 | 0.7151 | 0.7974 |
| 0.7411 | 18.27 | 6760 | 0.5810 | 0.7124 | 0.8171 | 0.8845 | 0.9276 | 0.8054 | 0.8677 | 0.5153 | 0.9056 | 0.8466 | 0.8515 | 0.8666 | 0.6692 | 0.7603 | 0.4065 | 0.8094 | 0.6960 | 0.7790 |
| 0.1648 | 18.32 | 6780 | 0.5773 | 0.7061 | 0.8120 | 0.8816 | 0.9271 | 0.7893 | 0.8766 | 0.5072 | 0.9058 | 0.8545 | 0.8238 | 0.8679 | 0.6679 | 0.7608 | 0.4047 | 0.8091 | 0.6790 | 0.7531 |
| 0.2575 | 18.38 | 6800 | 0.5514 | 0.7221 | 0.8238 | 0.8892 | 0.9317 | 0.7913 | 0.8867 | 0.5168 | 0.8949 | 0.8637 | 0.8814 | 0.8683 | 0.6709 | 0.7613 | 0.4087 | 0.8110 | 0.7296 | 0.8045 |
| 0.1961 | 18.43 | 6820 | 0.5768 | 0.7222 | 0.8244 | 0.8891 | 0.9306 | 0.7966 | 0.8749 | 0.5171 | 0.8953 | 0.8734 | 0.8827 | 0.8679 | 0.6705 | 0.7612 | 0.4152 | 0.8114 | 0.7256 | 0.8035 |
| 0.1617 | 18.49 | 6840 | 0.5591 | 0.7236 | 0.8224 | 0.8899 | 0.9328 | 0.7914 | 0.8782 | 0.5112 | 0.9008 | 0.8664 | 0.8759 | 0.8680 | 0.6708 | 0.7669 | 0.4179 | 0.8128 | 0.7251 | 0.8040 |
| 0.2819 | 18.54 | 6860 | 0.5806 | 0.7132 | 0.8183 | 0.8847 | 0.9261 | 0.7942 | 0.8849 | 0.5169 | 0.9034 | 0.8563 | 0.8465 | 0.8683 | 0.6696 | 0.7663 | 0.4157 | 0.8112 | 0.6892 | 0.7720 |
| 0.1305 | 18.59 | 6880 | 0.5868 | 0.7124 | 0.8181 | 0.8842 | 0.9320 | 0.7913 | 0.8783 | 0.5234 | 0.8966 | 0.8645 | 0.8406 | 0.8682 | 0.6677 | 0.7665 | 0.4158 | 0.8101 | 0.6899 | 0.7687 |
| 0.1813 | 18.65 | 6900 | 0.5757 | 0.7134 | 0.8165 | 0.8853 | 0.9263 | 0.7889 | 0.8840 | 0.5132 | 0.9095 | 0.8499 | 0.8432 | 0.8690 | 0.6650 | 0.7660 | 0.4176 | 0.8119 | 0.6934 | 0.7712 |
| 0.0823 | 18.7 | 6920 | 0.5687 | 0.7163 | 0.8202 | 0.8859 | 0.9305 | 0.7884 | 0.8794 | 0.5210 | 0.8960 | 0.8686 | 0.8577 | 0.8687 | 0.6690 | 0.7631 | 0.4220 | 0.8102 | 0.7019 | 0.7789 |
| 0.1101 | 18.76 | 6940 | 0.5771 | 0.7175 | 0.8233 | 0.8862 | 0.9296 | 0.8044 | 0.8818 | 0.5249 | 0.8942 | 0.8688 | 0.8595 | 0.8683 | 0.6717 | 0.7608 | 0.4202 | 0.8085 | 0.7055 | 0.7871 |
| 0.2333 | 18.81 | 6960 | 0.5566 | 0.7193 | 0.8222 | 0.8881 | 0.9359 | 0.7946 | 0.8759 | 0.5261 | 0.8964 | 0.8671 | 0.8595 | 0.8719 | 0.6701 | 0.7633 | 0.4214 | 0.8122 | 0.7076 | 0.7886 |
| 0.0968 | 18.86 | 6980 | 0.5836 | 0.7185 | 0.8233 | 0.8872 | 0.9315 | 0.8056 | 0.8643 | 0.5285 | 0.8966 | 0.8667 | 0.8697 | 0.8694 | 0.6666 | 0.7598 | 0.4158 | 0.8082 | 0.7122 | 0.7976 |
| 0.16 | 18.92 | 7000 | 0.5752 | 0.7158 | 0.8195 | 0.8862 | 0.9326 | 0.7931 | 0.8713 | 0.5191 | 0.8980 | 0.8684 | 0.8538 | 0.8695 | 0.6647 | 0.7645 | 0.4235 | 0.8113 | 0.6968 | 0.7802 |
| 0.1579 | 18.97 | 7020 | 0.6036 | 0.7103 | 0.8210 | 0.8824 | 0.9313 | 0.7935 | 0.8632 | 0.5605 | 0.8922 | 0.8669 | 0.8393 | 0.8678 | 0.6634 | 0.7607 | 0.4160 | 0.8059 | 0.6876 | 0.7707 |
| 0.1677 | 19.03 | 7040 | 0.5808 | 0.7113 | 0.8212 | 0.8829 | 0.9289 | 0.8068 | 0.8631 | 0.5528 | 0.8978 | 0.8586 | 0.8404 | 0.8669 | 0.6631 | 0.7616 | 0.4201 | 0.8072 | 0.6886 | 0.7714 |
| 0.1879 | 19.08 | 7060 | 0.5763 | 0.7170 | 0.8221 | 0.8859 | 0.9326 | 0.7825 | 0.8758 | 0.5463 | 0.8913 | 0.8593 | 0.8672 | 0.8671 | 0.6637 | 0.7588 | 0.4211 | 0.8066 | 0.7085 | 0.7934 |
| 0.1272 | 19.14 | 7080 | 0.5822 | 0.7121 | 0.8200 | 0.8833 | 0.9302 | 0.7935 | 0.8770 | 0.5450 | 0.8960 | 0.8608 | 0.8378 | 0.8673 | 0.6653 | 0.7628 | 0.4268 | 0.8087 | 0.6860 | 0.7678 |
| 0.188 | 19.19 | 7100 | 0.6020 | 0.7103 | 0.8159 | 0.8828 | 0.9306 | 0.8042 | 0.8661 | 0.5175 | 0.9005 | 0.8577 | 0.8350 | 0.8663 | 0.6672 | 0.7600 | 0.4219 | 0.8086 | 0.6823 | 0.7655 |
| 0.1206 | 19.24 | 7120 | 0.5927 | 0.7106 | 0.8173 | 0.8827 | 0.9365 | 0.8142 | 0.8713 | 0.5191 | 0.8931 | 0.8589 | 0.8277 | 0.8639 | 0.6675 | 0.7655 | 0.4221 | 0.8103 | 0.6845 | 0.7603 |
| 0.1531 | 19.3 | 7140 | 0.6357 | 0.7089 | 0.8169 | 0.8820 | 0.9312 | 0.8196 | 0.8886 | 0.5226 | 0.9018 | 0.8407 | 0.8141 | 0.8650 | 0.6661 | 0.7690 | 0.4261 | 0.8122 | 0.6761 | 0.7475 |
| 0.0871 | 19.35 | 7160 | 0.6291 | 0.7126 | 0.8197 | 0.8837 | 0.9291 | 0.8144 | 0.8843 | 0.5163 | 0.8985 | 0.8630 | 0.8323 | 0.8669 | 0.6698 | 0.7668 | 0.4257 | 0.8107 | 0.6845 | 0.7636 |
| 0.079 | 19.41 | 7180 | 0.5888 | 0.7124 | 0.8137 | 0.8843 | 0.9329 | 0.7816 | 0.8718 | 0.5038 | 0.9018 | 0.8695 | 0.8345 | 0.8683 | 0.6695 | 0.7641 | 0.4233 | 0.8108 | 0.6851 | 0.7654 |
| 0.148 | 19.46 | 7200 | 0.6259 | 0.7126 | 0.8116 | 0.8847 | 0.9339 | 0.7905 | 0.8603 | 0.4939 | 0.9076 | 0.8582 | 0.8370 | 0.8676 | 0.6691 | 0.7635 | 0.4242 | 0.8122 | 0.6853 | 0.7662 |
| 0.1276 | 19.51 | 7220 | 0.6049 | 0.7123 | 0.8164 | 0.8839 | 0.9373 | 0.8043 | 0.8574 | 0.5181 | 0.8955 | 0.8636 | 0.8388 | 0.8660 | 0.6685 | 0.7649 | 0.4208 | 0.8100 | 0.6876 | 0.7680 |
| 0.0878 | 19.57 | 7240 | 0.5910 | 0.7150 | 0.8162 | 0.8857 | 0.9336 | 0.7844 | 0.8623 | 0.5235 | 0.9030 | 0.8534 | 0.8531 | 0.8663 | 0.6678 | 0.7644 | 0.4077 | 0.8092 | 0.7061 | 0.7838 |
| 0.2596 | 19.62 | 7260 | 0.5838 | 0.7179 | 0.8185 | 0.8870 | 0.9337 | 0.7965 | 0.8655 | 0.5124 | 0.9019 | 0.8616 | 0.8580 | 0.8673 | 0.6723 | 0.7672 | 0.4136 | 0.8109 | 0.7066 | 0.7875 |
| 0.0873 | 19.68 | 7280 | 0.5732 | 0.7253 | 0.8273 | 0.8899 | 0.9265 | 0.8041 | 0.8811 | 0.5270 | 0.9002 | 0.8660 | 0.8859 | 0.8680 | 0.6728 | 0.7687 | 0.4244 | 0.8108 | 0.7248 | 0.8074 |
| 0.2127 | 19.73 | 7300 | 0.5882 | 0.7235 | 0.8260 | 0.8892 | 0.9249 | 0.7916 | 0.8881 | 0.5309 | 0.9009 | 0.8635 | 0.8822 | 0.8671 | 0.6701 | 0.7647 | 0.4173 | 0.8107 | 0.7289 | 0.8057 |
| 0.1186 | 19.78 | 7320 | 0.5851 | 0.7234 | 0.8251 | 0.8891 | 0.9316 | 0.8028 | 0.8566 | 0.5462 | 0.9038 | 0.8595 | 0.8751 | 0.8667 | 0.6694 | 0.7657 | 0.4174 | 0.8112 | 0.7291 | 0.8043 |
| 1.8503 | 19.84 | 7340 | 0.6085 | 0.7138 | 0.8217 | 0.8839 | 0.9318 | 0.7936 | 0.8725 | 0.5484 | 0.8934 | 0.8761 | 0.8364 | 0.8661 | 0.6688 | 0.7681 | 0.4153 | 0.8074 | 0.7009 | 0.7702 |
| 1.9456 | 19.89 | 7360 | 0.6852 | 0.6999 | 0.8127 | 0.8757 | 0.9372 | 0.7887 | 0.8814 | 0.5482 | 0.8823 | 0.8730 | 0.7784 | 0.8593 | 0.6682 | 0.7665 | 0.4103 | 0.8020 | 0.6740 | 0.7186 |
| 0.0961 | 19.95 | 7380 | 0.6292 | 0.7030 | 0.8143 | 0.8781 | 0.9328 | 0.8002 | 0.8746 | 0.5323 | 0.8908 | 0.8770 | 0.7924 | 0.8628 | 0.6691 | 0.7645 | 0.4129 | 0.8055 | 0.6791 | 0.7270 |
| 0.2522 | 20.0 | 7400 | 0.6386 | 0.7016 | 0.8164 | 0.8770 | 0.9316 | 0.8003 | 0.8730 | 0.5499 | 0.8869 | 0.8828 | 0.7901 | 0.8627 | 0.6672 | 0.7661 | 0.4089 | 0.8034 | 0.6767 | 0.7262 |
| 0.1623 | 20.05 | 7420 | 0.6480 | 0.7035 | 0.8118 | 0.8792 | 0.9333 | 0.7838 | 0.8896 | 0.5216 | 0.8954 | 0.8669 | 0.7921 | 0.8637 | 0.6659 | 0.7665 | 0.4135 | 0.8092 | 0.6790 | 0.7267 |
| 0.1648 | 20.11 | 7440 | 0.6506 | 0.7023 | 0.8108 | 0.8781 | 0.9338 | 0.7922 | 0.8661 | 0.5337 | 0.8989 | 0.8618 | 0.7888 | 0.8626 | 0.6675 | 0.7630 | 0.4151 | 0.8075 | 0.6755 | 0.7248 |
| 0.1676 | 20.16 | 7460 | 0.6525 | 0.7020 | 0.8119 | 0.8779 | 0.9305 | 0.7910 | 0.8584 | 0.5395 | 0.8995 | 0.8708 | 0.7938 | 0.8637 | 0.6686 | 0.7612 | 0.4130 | 0.8071 | 0.6746 | 0.7261 |
| 0.1036 | 20.22 | 7480 | 0.6309 | 0.7013 | 0.8048 | 0.8789 | 0.9343 | 0.7790 | 0.8725 | 0.4928 | 0.9041 | 0.8599 | 0.7909 | 0.8628 | 0.6691 | 0.7629 | 0.4011 | 0.8099 | 0.6773 | 0.7257 |
| 0.0918 | 20.27 | 7500 | 0.6292 | 0.7024 | 0.8080 | 0.8788 | 0.9338 | 0.7875 | 0.8729 | 0.5081 | 0.9014 | 0.8603 | 0.7917 | 0.8623 | 0.6706 | 0.7616 | 0.4103 | 0.8102 | 0.6754 | 0.7265 |
| 0.2906 | 20.32 | 7520 | 0.6243 | 0.7052 | 0.8096 | 0.8807 | 0.9384 | 0.7924 | 0.8675 | 0.5085 | 0.9013 | 0.8650 | 0.7937 | 0.8651 | 0.6708 | 0.7639 | 0.4192 | 0.8150 | 0.6747 | 0.7275 |
| 0.184 | 20.38 | 7540 | 0.6176 | 0.7045 | 0.8137 | 0.8796 | 0.9276 | 0.8040 | 0.8813 | 0.5235 | 0.9042 | 0.8587 | 0.7963 | 0.8642 | 0.6698 | 0.7653 | 0.4154 | 0.8106 | 0.6755 | 0.7304 |
| 0.0804 | 20.43 | 7560 | 0.5853 | 0.7082 | 0.8151 | 0.8819 | 0.9305 | 0.8037 | 0.8768 | 0.5146 | 0.9027 | 0.8695 | 0.8082 | 0.8664 | 0.6690 | 0.7690 | 0.4176 | 0.8119 | 0.6826 | 0.7409 |
| 0.1249 | 20.49 | 7580 | 0.6032 | 0.7065 | 0.8139 | 0.8823 | 0.9403 | 0.7796 | 0.8905 | 0.5151 | 0.8912 | 0.8813 | 0.7996 | 0.8715 | 0.6676 | 0.7676 | 0.4128 | 0.8153 | 0.6776 | 0.7331 |
| 0.0544 | 20.54 | 7600 | 0.6024 | 0.7060 | 0.8151 | 0.8819 | 0.9372 | 0.8066 | 0.8847 | 0.5206 | 0.8982 | 0.8606 | 0.7977 | 0.8701 | 0.6694 | 0.7685 | 0.4089 | 0.8148 | 0.6788 | 0.7312 |
| 0.0863 | 20.59 | 7620 | 0.5935 | 0.7068 | 0.8137 | 0.8821 | 0.9401 | 0.7984 | 0.8703 | 0.5322 | 0.9012 | 0.8576 | 0.7960 | 0.8678 | 0.6710 | 0.7732 | 0.4074 | 0.8165 | 0.6812 | 0.7303 |
| 0.3792 | 20.65 | 7640 | 0.6329 | 0.7074 | 0.8157 | 0.8814 | 0.9301 | 0.8079 | 0.8703 | 0.5180 | 0.9023 | 0.8766 | 0.8045 | 0.8660 | 0.6709 | 0.7707 | 0.4139 | 0.8127 | 0.6815 | 0.7361 |
| 1.3909 | 20.7 | 7660 | 0.6040 | 0.7091 | 0.8183 | 0.8822 | 0.9332 | 0.7971 | 0.8748 | 0.5360 | 0.8952 | 0.8784 | 0.8134 | 0.8686 | 0.6701 | 0.7696 | 0.4179 | 0.8115 | 0.6806 | 0.7458 |
| 0.1323 | 20.76 | 7680 | 0.6181 | 0.7114 | 0.8177 | 0.8834 | 0.9331 | 0.7969 | 0.8679 | 0.5273 | 0.8970 | 0.8704 | 0.8316 | 0.8687 | 0.6708 | 0.7649 | 0.4197 | 0.8099 | 0.6853 | 0.7607 |
| 0.3078 | 20.81 | 7700 | 0.6129 | 0.7091 | 0.8135 | 0.8828 | 0.9315 | 0.7961 | 0.8618 | 0.5012 | 0.9016 | 0.8754 | 0.8267 | 0.8686 | 0.6687 | 0.7623 | 0.4146 | 0.8088 | 0.6837 | 0.7570 |
| 0.1756 | 20.86 | 7720 | 0.5944 | 0.7105 | 0.8148 | 0.8834 | 0.9331 | 0.7884 | 0.8776 | 0.5083 | 0.8976 | 0.8696 | 0.8294 | 0.8686 | 0.6702 | 0.7630 | 0.4171 | 0.8099 | 0.6866 | 0.7579 |
| 0.1629 | 20.92 | 7740 | 0.6044 | 0.7115 | 0.8185 | 0.8833 | 0.9312 | 0.8068 | 0.8659 | 0.5289 | 0.8995 | 0.8656 | 0.8313 | 0.8691 | 0.6721 | 0.7627 | 0.4218 | 0.8090 | 0.6857 | 0.7599 |
| 0.2369 | 20.97 | 7760 | 0.5983 | 0.7118 | 0.8121 | 0.8846 | 0.9318 | 0.7858 | 0.8817 | 0.4914 | 0.9058 | 0.8487 | 0.8392 | 0.8689 | 0.6706 | 0.7652 | 0.4127 | 0.8103 | 0.6880 | 0.7670 |
| 0.1395 | 21.03 | 7780 | 0.6233 | 0.7111 | 0.8117 | 0.8843 | 0.9363 | 0.7813 | 0.8855 | 0.4988 | 0.9000 | 0.8407 | 0.8391 | 0.8679 | 0.6704 | 0.7670 | 0.4118 | 0.8106 | 0.6841 | 0.7660 |
| 0.1381 | 21.08 | 7800 | 0.6096 | 0.7104 | 0.8142 | 0.8833 | 0.9312 | 0.7804 | 0.8797 | 0.5074 | 0.8994 | 0.8715 | 0.8295 | 0.8696 | 0.6709 | 0.7652 | 0.4156 | 0.8089 | 0.6843 | 0.7583 |
| 0.2 | 21.14 | 7820 | 0.6285 | 0.7089 | 0.8189 | 0.8817 | 0.9331 | 0.8010 | 0.8722 | 0.5470 | 0.8953 | 0.8697 | 0.8138 | 0.8692 | 0.6699 | 0.7668 | 0.4205 | 0.8083 | 0.6784 | 0.7489 |
| 0.1102 | 21.19 | 7840 | 0.6080 | 0.7149 | 0.8199 | 0.8853 | 0.9309 | 0.7885 | 0.8815 | 0.5360 | 0.9009 | 0.8618 | 0.8398 | 0.8702 | 0.6735 | 0.7681 | 0.4225 | 0.8126 | 0.6894 | 0.7681 |
| 0.1476 | 21.24 | 7860 | 0.6102 | 0.7093 | 0.8163 | 0.8824 | 0.9323 | 0.7863 | 0.8800 | 0.5283 | 0.8970 | 0.8715 | 0.8187 | 0.8690 | 0.6705 | 0.7657 | 0.4184 | 0.8098 | 0.6800 | 0.7518 |
| 0.0475 | 21.3 | 7880 | 0.6285 | 0.7078 | 0.8160 | 0.8811 | 0.9292 | 0.7978 | 0.8854 | 0.5321 | 0.9005 | 0.8560 | 0.8108 | 0.8682 | 0.6730 | 0.7670 | 0.4194 | 0.8072 | 0.6758 | 0.7439 |
| 0.1551 | 21.35 | 7900 | 0.6299 | 0.7080 | 0.8168 | 0.8816 | 0.9310 | 0.8004 | 0.8913 | 0.5193 | 0.8943 | 0.8660 | 0.8154 | 0.8688 | 0.6750 | 0.7644 | 0.4144 | 0.8080 | 0.6772 | 0.7482 |
| 0.2911 | 21.41 | 7920 | 0.6102 | 0.7114 | 0.8188 | 0.8835 | 0.9340 | 0.8066 | 0.8715 | 0.5397 | 0.8986 | 0.8502 | 0.8310 | 0.8685 | 0.6744 | 0.7685 | 0.4109 | 0.8103 | 0.6857 | 0.7611 |
| 0.1838 | 21.46 | 7940 | 0.5847 | 0.7129 | 0.8095 | 0.8853 | 0.9376 | 0.7645 | 0.8754 | 0.5018 | 0.9060 | 0.8405 | 0.8409 | 0.8675 | 0.6675 | 0.7661 | 0.4112 | 0.8104 | 0.6954 | 0.7721 |
| 0.1123 | 21.51 | 7960 | 0.5571 | 0.7188 | 0.8210 | 0.8875 | 0.9322 | 0.8042 | 0.8765 | 0.5155 | 0.9021 | 0.8626 | 0.8541 | 0.8688 | 0.6738 | 0.7691 | 0.4197 | 0.8137 | 0.7050 | 0.7815 |
| 0.1224 | 21.57 | 7980 | 0.5748 | 0.7154 | 0.8187 | 0.8858 | 0.9340 | 0.8043 | 0.8794 | 0.5081 | 0.8982 | 0.8640 | 0.8430 | 0.8683 | 0.6763 | 0.7656 | 0.4144 | 0.8116 | 0.6985 | 0.7733 |
| 0.0828 | 21.62 | 8000 | 0.6015 | 0.7069 | 0.8133 | 0.8809 | 0.9327 | 0.8017 | 0.8742 | 0.5108 | 0.8995 | 0.8676 | 0.8064 | 0.8677 | 0.6759 | 0.7646 | 0.4124 | 0.8071 | 0.6806 | 0.7400 |
| 0.141 | 21.68 | 8020 | 0.5877 | 0.7097 | 0.8177 | 0.8823 | 0.9341 | 0.8095 | 0.8788 | 0.5266 | 0.8962 | 0.8645 | 0.8142 | 0.8680 | 0.6759 | 0.7701 | 0.4151 | 0.8103 | 0.6850 | 0.7432 |
| 0.1893 | 21.73 | 8040 | 0.5991 | 0.7073 | 0.8117 | 0.8815 | 0.9321 | 0.7786 | 0.8743 | 0.5240 | 0.9050 | 0.8590 | 0.8086 | 0.8683 | 0.6719 | 0.7683 | 0.4125 | 0.8092 | 0.6828 | 0.7383 |
| 0.2463 | 21.78 | 8060 | 0.5847 | 0.7071 | 0.8146 | 0.8809 | 0.9335 | 0.8030 | 0.8688 | 0.5378 | 0.9031 | 0.8547 | 0.8014 | 0.8670 | 0.6737 | 0.7701 | 0.4110 | 0.8084 | 0.6848 | 0.7351 |
| 0.072 | 21.84 | 8080 | 0.6718 | 0.7055 | 0.8147 | 0.8801 | 0.9321 | 0.7943 | 0.8692 | 0.5318 | 0.8971 | 0.8763 | 0.8021 | 0.8681 | 0.6727 | 0.7667 | 0.4099 | 0.8069 | 0.6783 | 0.7359 |
| 0.0983 | 21.89 | 8100 | 0.6329 | 0.7064 | 0.8145 | 0.8809 | 0.9296 | 0.8044 | 0.8810 | 0.5115 | 0.8996 | 0.8656 | 0.8096 | 0.8680 | 0.6746 | 0.7654 | 0.4076 | 0.8073 | 0.6821 | 0.7394 |
| 0.2275 | 21.95 | 8120 | 0.6321 | 0.7061 | 0.8120 | 0.8811 | 0.9317 | 0.7865 | 0.8864 | 0.5106 | 0.9007 | 0.8606 | 0.8074 | 0.8682 | 0.6722 | 0.7664 | 0.4082 | 0.8085 | 0.6808 | 0.7386 |
| 0.1424 | 22.0 | 8140 | 0.6322 | 0.7065 | 0.8122 | 0.8815 | 0.9345 | 0.7917 | 0.8777 | 0.5021 | 0.8966 | 0.8682 | 0.8144 | 0.8680 | 0.6755 | 0.7669 | 0.4064 | 0.8106 | 0.6744 | 0.7437 |
| 0.1075 | 22.05 | 8160 | 0.6205 | 0.7062 | 0.8165 | 0.8809 | 0.9326 | 0.7969 | 0.8742 | 0.5308 | 0.8931 | 0.8755 | 0.8125 | 0.8685 | 0.6725 | 0.7675 | 0.4105 | 0.8101 | 0.6720 | 0.7423 |
| 0.088 | 22.11 | 8180 | 0.6247 | 0.7052 | 0.8112 | 0.8807 | 0.9349 | 0.7730 | 0.8796 | 0.5242 | 0.8976 | 0.8611 | 0.8081 | 0.8681 | 0.6701 | 0.7657 | 0.4104 | 0.8100 | 0.6723 | 0.7398 |
| 0.2148 | 22.16 | 8200 | 0.6264 | 0.7070 | 0.8140 | 0.8818 | 0.9315 | 0.8007 | 0.8831 | 0.5053 | 0.8995 | 0.8655 | 0.8125 | 0.8689 | 0.6747 | 0.7666 | 0.4100 | 0.8113 | 0.6751 | 0.7421 |
| 0.1816 | 22.22 | 8220 | 0.6494 | 0.7068 | 0.8132 | 0.8816 | 0.9284 | 0.7953 | 0.8800 | 0.5036 | 0.9019 | 0.8639 | 0.8192 | 0.8682 | 0.6744 | 0.7653 | 0.4074 | 0.8100 | 0.6745 | 0.7476 |
| 0.1059 | 22.27 | 8240 | 0.6124 | 0.7088 | 0.8150 | 0.8827 | 0.9289 | 0.8077 | 0.8816 | 0.5060 | 0.9042 | 0.8533 | 0.8235 | 0.8684 | 0.6774 | 0.7693 | 0.4074 | 0.8121 | 0.6767 | 0.7503 |
| 0.1387 | 22.32 | 8260 | 0.6376 | 0.7070 | 0.8149 | 0.8815 | 0.9340 | 0.7960 | 0.8797 | 0.5223 | 0.8968 | 0.8644 | 0.8110 | 0.8691 | 0.6763 | 0.7688 | 0.4060 | 0.8102 | 0.6746 | 0.7436 |
| 0.3907 | 22.38 | 8280 | 0.6208 | 0.7050 | 0.8143 | 0.8803 | 0.9302 | 0.7992 | 0.8879 | 0.5138 | 0.8979 | 0.8708 | 0.8002 | 0.8683 | 0.6746 | 0.7660 | 0.4076 | 0.8089 | 0.6741 | 0.7356 |
| 0.1376 | 22.43 | 8300 | 0.6203 | 0.7060 | 0.8132 | 0.8809 | 0.9299 | 0.7943 | 0.8723 | 0.5208 | 0.9032 | 0.8616 | 0.8101 | 0.8686 | 0.6754 | 0.7661 | 0.4084 | 0.8093 | 0.6742 | 0.7401 |
| 0.1202 | 22.49 | 8320 | 0.6072 | 0.7067 | 0.8132 | 0.8816 | 0.9304 | 0.8010 | 0.8882 | 0.5135 | 0.9045 | 0.8414 | 0.8133 | 0.8684 | 0.6759 | 0.7683 | 0.4057 | 0.8106 | 0.6761 | 0.7418 |
| 0.1391 | 22.54 | 8340 | 0.6402 | 0.7062 | 0.8132 | 0.8807 | 0.9283 | 0.8123 | 0.8665 | 0.5124 | 0.9065 | 0.8565 | 0.8098 | 0.8681 | 0.6780 | 0.7651 | 0.4054 | 0.8063 | 0.6798 | 0.7407 |
| 0.0847 | 22.59 | 8360 | 0.6071 | 0.7095 | 0.8180 | 0.8826 | 0.9312 | 0.8173 | 0.8706 | 0.5239 | 0.8995 | 0.8565 | 0.8268 | 0.8679 | 0.6782 | 0.7671 | 0.4064 | 0.8106 | 0.6819 | 0.7541 |
| 0.1487 | 22.65 | 8380 | 0.6051 | 0.7089 | 0.8160 | 0.8823 | 0.9335 | 0.8133 | 0.8686 | 0.5156 | 0.8993 | 0.8627 | 0.8187 | 0.8687 | 0.6788 | 0.7660 | 0.4086 | 0.8097 | 0.6796 | 0.7509 |
| 0.1683 | 22.7 | 8400 | 0.6426 | 0.7090 | 0.8136 | 0.8826 | 0.9356 | 0.7975 | 0.8708 | 0.5141 | 0.9002 | 0.8577 | 0.8192 | 0.8680 | 0.6773 | 0.7673 | 0.4095 | 0.8109 | 0.6792 | 0.7509 |
| 0.0552 | 22.76 | 8420 | 0.6230 | 0.7076 | 0.8156 | 0.8815 | 0.9319 | 0.7963 | 0.8687 | 0.5280 | 0.8982 | 0.8691 | 0.8166 | 0.8690 | 0.6761 | 0.7666 | 0.4095 | 0.8092 | 0.6753 | 0.7475 |
| 1.8276 | 22.81 | 8440 | 0.6169 | 0.7089 | 0.8159 | 0.8823 | 0.9345 | 0.8091 | 0.8806 | 0.5308 | 0.9019 | 0.8406 | 0.8138 | 0.8680 | 0.6786 | 0.7711 | 0.4098 | 0.8118 | 0.6761 | 0.7468 |
| 0.069 | 22.86 | 8460 | 0.6010 | 0.7101 | 0.8186 | 0.8829 | 0.9330 | 0.8154 | 0.8825 | 0.5262 | 0.8980 | 0.8535 | 0.8215 | 0.8685 | 0.6771 | 0.7699 | 0.4113 | 0.8121 | 0.6799 | 0.7517 |
| 0.2116 | 22.92 | 8480 | 0.6090 | 0.7095 | 0.8164 | 0.8825 | 0.9331 | 0.8052 | 0.8833 | 0.5195 | 0.8989 | 0.8588 | 0.8157 | 0.8685 | 0.6770 | 0.7690 | 0.4147 | 0.8113 | 0.6792 | 0.7469 |
| 0.2707 | 22.97 | 8500 | 0.6086 | 0.7098 | 0.8186 | 0.8822 | 0.9338 | 0.8160 | 0.8696 | 0.5288 | 0.8965 | 0.8734 | 0.8124 | 0.8682 | 0.6769 | 0.7711 | 0.4186 | 0.8113 | 0.6787 | 0.7437 |
| 1.9017 | 23.03 | 8520 | 0.6366 | 0.7075 | 0.8146 | 0.8810 | 0.9340 | 0.8076 | 0.8718 | 0.5253 | 0.9000 | 0.8598 | 0.8040 | 0.8671 | 0.6775 | 0.7683 | 0.4149 | 0.8084 | 0.6767 | 0.7397 |
| 0.5376 | 23.08 | 8540 | 0.6105 | 0.7078 | 0.8158 | 0.8810 | 0.9323 | 0.8165 | 0.8637 | 0.5240 | 0.9006 | 0.8655 | 0.8077 | 0.8673 | 0.6770 | 0.7679 | 0.4160 | 0.8078 | 0.6767 | 0.7421 |
| 0.1984 | 23.14 | 8560 | 0.6390 | 0.7063 | 0.8142 | 0.8799 | 0.9299 | 0.7995 | 0.8657 | 0.5335 | 0.9027 | 0.8696 | 0.7981 | 0.8675 | 0.6774 | 0.7673 | 0.4177 | 0.8058 | 0.6743 | 0.7341 |
| 0.1297 | 23.19 | 8580 | 0.6062 | 0.7109 | 0.8188 | 0.8826 | 0.9327 | 0.8114 | 0.8771 | 0.5381 | 0.9004 | 0.8581 | 0.8136 | 0.8674 | 0.6788 | 0.7736 | 0.4171 | 0.8114 | 0.6828 | 0.7454 |
| 0.1256 | 23.24 | 8600 | 0.6117 | 0.7098 | 0.8167 | 0.8820 | 0.9329 | 0.8076 | 0.8649 | 0.5336 | 0.9009 | 0.8606 | 0.8161 | 0.8680 | 0.6787 | 0.7690 | 0.4151 | 0.8084 | 0.6819 | 0.7475 |
| 0.2178 | 23.3 | 8620 | 0.6301 | 0.7063 | 0.8063 | 0.8813 | 0.9354 | 0.7755 | 0.8659 | 0.4904 | 0.9053 | 0.8642 | 0.8075 | 0.8668 | 0.6739 | 0.7661 | 0.4111 | 0.8087 | 0.6767 | 0.7409 |
| 0.12 | 23.35 | 8640 | 0.5954 | 0.7134 | 0.8183 | 0.8845 | 0.9305 | 0.8061 | 0.8892 | 0.5131 | 0.9020 | 0.8594 | 0.8282 | 0.8681 | 0.6795 | 0.7728 | 0.4139 | 0.8125 | 0.6910 | 0.7563 |
| 1.6866 | 23.41 | 8660 | 0.6285 | 0.7073 | 0.8171 | 0.8808 | 0.9342 | 0.8003 | 0.8867 | 0.5319 | 0.8915 | 0.8707 | 0.8042 | 0.8674 | 0.6780 | 0.7715 | 0.4115 | 0.8086 | 0.6755 | 0.7385 |
| 0.8764 | 23.46 | 8680 | 0.6196 | 0.7069 | 0.8179 | 0.8805 | 0.9319 | 0.8104 | 0.8803 | 0.5430 | 0.8968 | 0.8597 | 0.8030 | 0.8670 | 0.6774 | 0.7718 | 0.4108 | 0.8087 | 0.6755 | 0.7376 |
| 0.2846 | 23.51 | 8700 | 0.6429 | 0.7054 | 0.8089 | 0.8801 | 0.9338 | 0.7897 | 0.8778 | 0.5126 | 0.9054 | 0.8470 | 0.7960 | 0.8662 | 0.6773 | 0.7671 | 0.4131 | 0.8070 | 0.6741 | 0.7328 |
| 0.1733 | 23.57 | 8720 | 0.6716 | 0.7060 | 0.8137 | 0.8801 | 0.9312 | 0.7954 | 0.8775 | 0.5342 | 0.9026 | 0.8564 | 0.7982 | 0.8679 | 0.6766 | 0.7678 | 0.4128 | 0.8065 | 0.6765 | 0.7339 |
| 0.1889 | 23.62 | 8740 | 0.6285 | 0.7086 | 0.8183 | 0.8816 | 0.9327 | 0.8064 | 0.8815 | 0.5367 | 0.8958 | 0.8666 | 0.8085 | 0.8681 | 0.6788 | 0.7732 | 0.4114 | 0.8103 | 0.6773 | 0.7407 |
| 0.2335 | 23.68 | 8760 | 0.6187 | 0.7077 | 0.8179 | 0.8812 | 0.9290 | 0.8058 | 0.8765 | 0.5372 | 0.8993 | 0.8645 | 0.8128 | 0.8692 | 0.6764 | 0.7698 | 0.4107 | 0.8081 | 0.6760 | 0.7438 |
| 0.1434 | 23.73 | 8780 | 0.6220 | 0.7068 | 0.8185 | 0.8804 | 0.9287 | 0.8154 | 0.8699 | 0.5404 | 0.8982 | 0.8675 | 0.8094 | 0.8688 | 0.6771 | 0.7681 | 0.4102 | 0.8063 | 0.6748 | 0.7424 |
| 0.1091 | 23.78 | 8800 | 0.6053 | 0.7097 | 0.8177 | 0.8824 | 0.9306 | 0.7995 | 0.8773 | 0.5373 | 0.9012 | 0.8613 | 0.8165 | 0.8697 | 0.6777 | 0.7712 | 0.4128 | 0.8101 | 0.6800 | 0.7462 |
| 0.0441 | 23.84 | 8820 | 0.6099 | 0.7090 | 0.8185 | 0.8819 | 0.9339 | 0.8164 | 0.8647 | 0.5389 | 0.8970 | 0.8614 | 0.8173 | 0.8684 | 0.6783 | 0.7700 | 0.4109 | 0.8097 | 0.6781 | 0.7478 |
| 0.0642 | 23.89 | 8840 | 0.6071 | 0.7085 | 0.8166 | 0.8815 | 0.9342 | 0.8128 | 0.8618 | 0.5345 | 0.8997 | 0.8641 | 0.8094 | 0.8681 | 0.6780 | 0.7694 | 0.4140 | 0.8089 | 0.6784 | 0.7430 |
| 0.1659 | 23.95 | 8860 | 0.5899 | 0.7092 | 0.8138 | 0.8825 | 0.9322 | 0.8027 | 0.8705 | 0.4997 | 0.9026 | 0.8791 | 0.8101 | 0.8693 | 0.6781 | 0.7697 | 0.4154 | 0.8109 | 0.6780 | 0.7430 |
| 0.1801 | 24.0 | 8880 | 0.6425 | 0.7073 | 0.8119 | 0.8814 | 0.9312 | 0.7967 | 0.8706 | 0.5050 | 0.9036 | 0.8643 | 0.8116 | 0.8687 | 0.6768 | 0.7656 | 0.4120 | 0.8072 | 0.6767 | 0.7441 |
| 1.0472 | 24.05 | 8900 | 0.6368 | 0.7085 | 0.8126 | 0.8822 | 0.9331 | 0.8066 | 0.8658 | 0.5075 | 0.9060 | 0.8547 | 0.8142 | 0.8683 | 0.6777 | 0.7685 | 0.4122 | 0.8102 | 0.6772 | 0.7457 |
| 0.2152 | 24.11 | 8920 | 0.6309 | 0.7080 | 0.8133 | 0.8818 | 0.9332 | 0.8037 | 0.8715 | 0.5062 | 0.9010 | 0.8646 | 0.8131 | 0.8683 | 0.6775 | 0.7679 | 0.4117 | 0.8097 | 0.6756 | 0.7451 |
| 0.0434 | 24.16 | 8940 | 0.6352 | 0.7078 | 0.8145 | 0.8816 | 0.9339 | 0.7984 | 0.8811 | 0.5156 | 0.8965 | 0.8632 | 0.8130 | 0.8682 | 0.6770 | 0.7683 | 0.4116 | 0.8089 | 0.6758 | 0.7451 |
| 0.1156 | 24.22 | 8960 | 0.6161 | 0.7095 | 0.8189 | 0.8825 | 0.9301 | 0.8131 | 0.8746 | 0.5325 | 0.9002 | 0.8620 | 0.8200 | 0.8690 | 0.6765 | 0.7718 | 0.4122 | 0.8118 | 0.6779 | 0.7471 |
| 0.1483 | 24.27 | 8980 | 0.6298 | 0.7072 | 0.8137 | 0.8813 | 0.9332 | 0.8052 | 0.8798 | 0.5192 | 0.9018 | 0.8486 | 0.8082 | 0.8675 | 0.6770 | 0.7689 | 0.4101 | 0.8089 | 0.6762 | 0.7418 |
| 0.1716 | 24.32 | 9000 | 0.6275 | 0.7076 | 0.8139 | 0.8815 | 0.9328 | 0.8087 | 0.8726 | 0.5286 | 0.9052 | 0.8344 | 0.8148 | 0.8679 | 0.6765 | 0.7689 | 0.4102 | 0.8087 | 0.6753 | 0.7457 |
| 0.0314 | 24.38 | 9020 | 0.6365 | 0.7065 | 0.8143 | 0.8808 | 0.9320 | 0.8008 | 0.8773 | 0.5204 | 0.8994 | 0.8663 | 0.8041 | 0.8678 | 0.6762 | 0.7681 | 0.4103 | 0.8087 | 0.6759 | 0.7385 |
| 0.1488 | 24.43 | 9040 | 0.6421 | 0.7080 | 0.8135 | 0.8818 | 0.9341 | 0.8048 | 0.8730 | 0.5076 | 0.8997 | 0.8636 | 0.8120 | 0.8674 | 0.6777 | 0.7686 | 0.4114 | 0.8103 | 0.6769 | 0.7434 |
| 0.1511 | 24.49 | 9060 | 0.6465 | 0.7060 | 0.8128 | 0.8805 | 0.9321 | 0.7908 | 0.8723 | 0.5162 | 0.8979 | 0.8728 | 0.8077 | 0.8685 | 0.6759 | 0.7654 | 0.4117 | 0.8071 | 0.6732 | 0.7402 |
| 0.1712 | 24.54 | 9080 | 0.6452 | 0.7064 | 0.8146 | 0.8806 | 0.9298 | 0.7991 | 0.8749 | 0.5260 | 0.9005 | 0.8626 | 0.8096 | 0.8685 | 0.6755 | 0.7657 | 0.4104 | 0.8066 | 0.6762 | 0.7418 |
| 0.0786 | 24.59 | 9100 | 0.6148 | 0.7074 | 0.8186 | 0.8810 | 0.9290 | 0.8139 | 0.8767 | 0.5383 | 0.8993 | 0.8645 | 0.8088 | 0.8689 | 0.6755 | 0.7693 | 0.4112 | 0.8083 | 0.6780 | 0.7409 |
| 0.1503 | 24.65 | 9120 | 0.6358 | 0.7075 | 0.8137 | 0.8814 | 0.9325 | 0.7985 | 0.8723 | 0.5160 | 0.9009 | 0.8665 | 0.8091 | 0.8684 | 0.6763 | 0.7678 | 0.4128 | 0.8088 | 0.6769 | 0.7417 |
| 0.2824 | 24.7 | 9140 | 0.6273 | 0.7067 | 0.8133 | 0.8808 | 0.9353 | 0.7995 | 0.8688 | 0.5238 | 0.8989 | 0.8643 | 0.8028 | 0.8673 | 0.6767 | 0.7680 | 0.4128 | 0.8084 | 0.6761 | 0.7379 |
| 0.417 | 24.76 | 9160 | 0.6413 | 0.7065 | 0.8109 | 0.8807 | 0.9298 | 0.8041 | 0.8759 | 0.5149 | 0.9101 | 0.8339 | 0.8073 | 0.8674 | 0.6767 | 0.7664 | 0.4127 | 0.8058 | 0.6754 | 0.7408 |
| 0.0708 | 24.81 | 9180 | 0.6532 | 0.7076 | 0.8142 | 0.8813 | 0.9317 | 0.8104 | 0.8704 | 0.5146 | 0.9022 | 0.8588 | 0.8116 | 0.8681 | 0.6766 | 0.7668 | 0.4127 | 0.8076 | 0.6780 | 0.7437 |
| 0.1626 | 24.86 | 9200 | 0.6461 | 0.7077 | 0.8118 | 0.8816 | 0.9331 | 0.8007 | 0.8708 | 0.5031 | 0.9031 | 0.8606 | 0.8110 | 0.8682 | 0.6774 | 0.7663 | 0.4120 | 0.8079 | 0.6783 | 0.7435 |
| 0.0988 | 24.92 | 9220 | 0.6357 | 0.7069 | 0.8125 | 0.8809 | 0.9379 | 0.8022 | 0.8645 | 0.5140 | 0.8966 | 0.8692 | 0.8035 | 0.8664 | 0.6774 | 0.7685 | 0.4127 | 0.8093 | 0.6759 | 0.7382 |
| 0.1384 | 24.97 | 9240 | 0.6325 | 0.7077 | 0.8137 | 0.8816 | 0.9348 | 0.8020 | 0.8775 | 0.5017 | 0.8953 | 0.8739 | 0.8105 | 0.8677 | 0.6774 | 0.7684 | 0.4116 | 0.8094 | 0.6762 | 0.7429 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.17.1
- Tokenizers 0.13.3
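
The table above evaluates every 20 optimizer steps and runs to roughly epoch 25 (about 370 steps per epoch), which matches a `Trainer` configured along the lines of the hypothetical sketch below; the output directory is a placeholder and every argument is an inference from the step/epoch columns, not taken from the card.

```python
# Hypothetical reconstruction of the evaluation cadence seen in the
# table above -- inferred from the step/epoch columns, not from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",            # placeholder path
    evaluation_strategy="steps",     # rows appear per step, not per epoch
    eval_steps=20,                   # steps advance 20 at a time (5380, 5400, ...)
    logging_steps=20,                # a training-loss value accompanies each row
    num_train_epochs=25,             # the last row sits at epoch 24.97
)
```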
|
danielnoumon/Techday-NLP-BERT-NewsGroupClassification | danielnoumon | 2024-02-22T13:43:31Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-22T13:43:20Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
base_model: distilbert-base-uncased
model-index:
- name: Techday-NLP-BERT-NewsGroupClassification
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Techday-NLP-BERT-NewsGroupClassification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
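
Absent official usage notes, a minimal, hypothetical inference sketch follows; the checkpoint ships TensorFlow weights (note the `tf` tag), and the example sentence and label handling are illustrative assumptions, since the card does not document the class mapping.

```python
# Hypothetical usage sketch -- not part of the original card.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "danielnoumon/Techday-NLP-BERT-NewsGroupClassification"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(
    "NASA released new images from the latest space telescope.",
    return_tensors="tf",
)
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
# id2label may be generic (LABEL_0, LABEL_1, ...) if no mapping was saved.
print(model.config.id2label.get(pred, f"LABEL_{pred}"))
```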
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch reconstructing the optimizer follows the list):
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
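
For readability, here is a minimal TensorFlow sketch reconstructing the optimizer configuration listed above; every numeric value is copied from the config dict, and nothing else is assumed.

```python
# Rebuild the serialized Keras optimizer config as actual objects.
import tensorflow as tf

# PolynomialDecay with power=1.0 is a plain linear decay from 2e-05
# to 0 over 1908 steps (the schedule create_optimizer emits by default).
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1908,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```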
### Training results
### Framework versions
- Transformers 4.37.2
- TensorFlow 2.15.0
- Datasets 2.17.1
- Tokenizers 0.15.2
|